2016-12-09T20:57:09.441+0100 I CONTROL [initandlisten] MongoDB starting : pid=82572 port=31001 dbpath=./db/31001 64-bit host=Christians-MacBook-Pro.local
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] db version v3.2.11
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] git version: 009580ad490190ba33d1c6253ebd8d91808923e4
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] allocator: system
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] modules: none
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] build environment:
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] distarch: x86_64
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] target_arch: x86_64
2016-12-09T20:57:09.442+0100 I CONTROL [initandlisten] options: { net: { port: 31001 }, replication: { replSet: "rs" }, storage: { dbPath: "./db/31001" }, systemLog: { destination: "file", path: "db/31001.log", verbosity: 3 } }
2016-12-09T20:57:09.442+0100 D NETWORK [initandlisten] fd limit hard:9223372036854775807 soft:7168 max conn: 5734
2016-12-09T20:57:09.443+0100 I - [initandlisten] Detected data files in ./db/31001 created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2016-12-09T20:57:09.443+0100 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=9G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-12-09T20:57:09.924+0100 D COMMAND [WTJournalFlusher] BackgroundJob starting: WTJournalFlusher
2016-12-09T20:57:09.924+0100 D STORAGE [WTJournalFlusher] starting WTJournalFlusher thread
2016-12-09T20:57:09.924+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:_mdb_catalog -> { numRecords: 7, dataSize: 2032 }
2016-12-09T20:57:09.924+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-0--295440694794046494 -> { numRecords: 1, dataSize: 61 }
2016-12-09T20:57:09.924+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-11--295440694794046494 -> { numRecords: 44443, dataSize: 1288847 }
2016-12-09T20:57:09.924+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-2--295440694794046494 -> { numRecords: 8, dataSize: 12429 }
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-4--295440694794046494 -> { numRecords: 1, dataSize: 705 }
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-6--295440694794046494 -> { numRecords: 44515, dataSize: 4673357 }
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-7--295440694794046494 -> { numRecords: 1, dataSize: 75 }
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WiredTigerSizeStorer::loadFrom table:collection-9--295440694794046494 -> { numRecords: 1, dataSize: 60 }
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=10m,split_pct=90,leaf_value_max=64MB,checksum=on,block_compressor=snappy,,key_format=q,value_format=u,app_metadata=(formatVersion=1)
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.925+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:_mdb_catalog ok range 1 -> 1 current: 1
2016-12-09T20:57:09.931+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.931+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.931+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.931+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-11--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.931+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.931+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.932+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.932+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-0--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.932+0100 D STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(4)
2016-12-09T20:57:09.932+0100 D STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } }
2016-12-09T20:57:09.932+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] }
2016-12-09T20:57:09.932+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-6--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.933+0100 I STORAGE [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2016-12-09T20:57:09.933+0100 D COMMAND [WT RecordStoreThread: local.oplog.rs] BackgroundJob starting: WT RecordStoreThread: local.oplog.rs
2016-12-09T20:57:09.933+0100 I STORAGE [initandlisten] The size storer reports that the oplog contains 44515 records totaling to 4673357 bytes
2016-12-09T20:57:09.933+0100 D STORAGE [WT RecordStoreThread: local.oplog.rs] no global storage engine yet
2016-12-09T20:57:09.933+0100 I STORAGE [initandlisten] Sampling from the oplog between Dec 8 18:04:19:1 and Dec 9 20:55:30:2 to determine where to place markers for truncation
2016-12-09T20:57:09.933+0100 I STORAGE [initandlisten] Taking 2 samples and assuming that each section of oplog contains approximately 174336 records totaling to 18302468 bytes
2016-12-09T20:57:09.933+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.933+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.933+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.933+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-9--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.933+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-7--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.934+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-2--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.935+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.935+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.935+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.935+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:collection-4--295440694794046494 ok range 1 -> 1 current: 1
2016-12-09T20:57:09.935+0100 D STORAGE [initandlisten] WT commit_transaction
2016-12-09T20:57:09.935+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.941+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-1--295440694794046494 ok range 6 -> 6 current: 6
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.944+0100 D STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(4)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(4)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(4)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(4)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] local.oplog.rs: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-10--295440694794046494 ok range 6 -> 6 current: 6
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] local.replset.election: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.945+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-8--295440694794046494 ok range 6 -> 6 current: 6
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] local.replset.minvalid: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.946+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-3--295440694794046494 ok range 6 -> 6 current: 6
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-5--295440694794046494 ok range 6 -> 6 current: 6
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] local.system.replset: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] Checking node for SERVER-23299 eligibility
2016-12-09T20:57:09.947+0100 D STORAGE [initandlisten] Checking node for SERVER-23299 applicability - reading startup log
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] Checking node for SERVER-23299 applicability - checking version 3.2.x for x in [0, 4]
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] Recovering database: app
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] WiredTigerUtil::checkApplicationMetadataFormatVersion uri: table:index-12--295440694794046494 ok range 6 -> 6 current: 6
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: app.test @ RecordId(7)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] app.test: clearing plan cache - collection info cache reset
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] Recovering database: local
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: local.me @ RecordId(1)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: local.oplog.rs @ RecordId(4)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.election @ RecordId(6)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] looking up metadata for: local.replset.minvalid @ RecordId(5)
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" }
2016-12-09T20:57:09.948+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ RecordId(2)
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" }
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] looking up metadata for: local.system.replset @ RecordId(3)
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" }
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT commit_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] done repairDatabases
2016-12-09T20:57:09.949+0100 D QUERY [initandlisten] Running query: query: {} sort: {} projection: {}
2016-12-09T20:57:09.949+0100 D QUERY [initandlisten] Collection admin.system.roles does not exist. Using EOF plan: query: {} sort: {} projection: {}
2016-12-09T20:57:09.949+0100 I COMMAND [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:0 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:20 locks:{ Global: { acquireCount: { r: 8, W: 2 } }, Database: { acquireCount: { r: 3 } }, Collection: { acquireCount: { r: 3 } } } 0ms
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: app.test
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: local.me
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: local.oplog.rs
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: local.replset.election
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: local.replset.minvalid
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: local.startup_log
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] IndexRebuilder::checkNS: local.system.replset
2016-12-09T20:57:09.949+0100 D INDEX [initandlisten] checking complete
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.949+0100 D EXECUTOR [replExecDBWorker-0] starting thread in pool replExecDBWorker-Pool
2016-12-09T20:57:09.949+0100 D ASIO [NetworkInterfaceASIO-Replication-0] The NetworkInterfaceASIO worker thread is spinning up
2016-12-09T20:57:09.949+0100 D EXECUTOR [replExecDBWorker-1] starting thread in pool replExecDBWorker-Pool
2016-12-09T20:57:09.949+0100 D EXECUTOR [replExecDBWorker-0] waiting for work; I am one of 3 thread(s); the minimum number of threads is 3
2016-12-09T20:57:09.949+0100 D EXECUTOR [replExecDBWorker-2] starting thread in pool replExecDBWorker-Pool
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.949+0100 D EXECUTOR [replExecDBWorker-1] waiting for work; I am one of 3 thread(s); the minimum number of threads is 3
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.949+0100 D EXECUTOR [replExecDBWorker-2] waiting for work; I am one of 3 thread(s); the minimum number of threads is 3
2016-12-09T20:57:09.949+0100 D REPL [initandlisten] returning initial sync flag value of 0
2016-12-09T20:57:09.949+0100 D REPL [initandlisten] setting minvalid to at least: (term: -1, timestamp: Jan 1 01:00:00:0)({ ts: Timestamp 0|0, t: -1 })
2016-12-09T20:57:09.949+0100 D QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.949+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.950+0100 D REPL [initandlisten] returning oplog delete from point: 0:0
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.950+0100 D REPL [initandlisten] setting oplog delete from point to: Jan 1 01:00:00:0
2016-12-09T20:57:09.950+0100 D QUERY [initandlisten] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.950+0100 D REPL [initandlisten] returning initial sync flag value of 0
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT rollback_transaction
2016-12-09T20:57:09.950+0100 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
2016-12-09T20:57:09.950+0100 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
2016-12-09T20:57:09.950+0100 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
2016-12-09T20:57:09.950+0100 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory './db/31001/diagnostic.data'
2016-12-09T20:57:09.950+0100 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2016-12-09T20:57:09.950+0100 D NETWORK [HostnameCanonicalizationWorker] Hostname Canonicalizer is acquiring host FQDNs
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT begin_transaction
2016-12-09T20:57:09.950+0100 D STORAGE [initandlisten] WT commit_transaction
2016-12-09T20:57:09.950+0100 I NETWORK [initandlisten] waiting for connections on port 31001
2016-12-09T20:57:09.950+0100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2016-12-09T20:57:09.951+0100 D NETWORK [ReplicationExecutor] connected to server localhost:12345 (127.0.0.1)
2016-12-09T20:57:09.952+0100 D NETWORK [ReplicationExecutor] getBoundAddrs(): [ 127.0.0.1] [ 192.168.1.105] [ 10.50.0.163]
2016-12-09T20:57:09.952+0100 D NETWORK [ReplicationExecutor] getAddrsForHost("localhost:31001"): [ 127.0.0.1]
2016-12-09T20:57:09.952+0100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
2016-12-09T20:57:09.953+0100 W NETWORK [ReplicationExecutor] Failed to connect to 127.0.0.1:31002, reason: errno:61 Connection refused
2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] Updated term in topology coordinator to 0 due to new config
2016-12-09T20:57:09.953+0100 I REPL [ReplicationExecutor] New replica set config in use: { _id: "rs", version: 7, protocolVersion: 1, members: [ { _id: 0, host: "localhost:12345", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 100.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "localhost:31001", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0,
votes: 1 }, { _id: 2, host: "localhost:31002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5849929337536d4b3f1a9485') } } 2016-12-09T20:57:09.953+0100 I REPL [ReplicationExecutor] This node is localhost:31001 in the config 2016-12-09T20:57:09.953+0100 D NETWORK [HostnameCanonicalizationWorker] Hostname Canonicalizer acquired FQDNs 2016-12-09T20:57:09.953+0100 I REPL [ReplicationExecutor] transition to STARTUP2 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:09.953Z 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:09.953Z 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:09.953+0100 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:09.953+0100 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:57:09.953+0100 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:57:19.953+0100 2016-12-09T20:57:09.953+0100 D REPL [ReplicationExecutor] Current term is now 87 2016-12-09T20:57:09.953+0100 I REPL [ReplicationExecutor] Starting replication applier threads 2016-12-09T20:57:09.953+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 1 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:19.953+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 87 } 2016-12-09T20:57:09.953+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:09.953+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] The NetworkInterfaceASIO worker thread is spinning up 2016-12-09T20:57:09.953+0100 D EXECUTOR [rsBackgroundSync-0] starting thread in pool rsBackgroundSync 2016-12-09T20:57:09.953+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:57:09.953+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:57:09.953+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 3 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:19.953+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 87 } 2016-12-09T20:57:09.953+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:57:09.953+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:09.953+0100 D REPL [rsSync] returning initial sync flag value of 0 2016-12-09T20:57:09.953+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:57:09.953+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:31002 2016-12-09T20:57:09.953+0100 I REPL [ReplicationExecutor] transition to RECOVERING 2016-12-09T20:57:09.953+0100 D REPL [rsBackgroundSync] bgsync fetch queue set to: (term: 87, timestamp: Dec 9 20:55:30:2) 4187683196111166928 2016-12-09T20:57:09.953+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:57:09.953+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:57:09.953+0100 D REPL [rsBackgroundSync] returning minvalid: (term: 87, timestamp: Dec 9 20:55:30:2)({ ts: Timestamp 
1481313330000|2, t: 87 }) 2016-12-09T20:57:09.953+0100 D EXECUTOR [repl writer worker 0] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.953+0100 D EXECUTOR [repl writer worker 1] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.953+0100 D EXECUTOR [repl writer worker 2] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.953+0100 D EXECUTOR [repl writer worker 3] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.953+0100 D EXECUTOR [repl writer worker 4] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.953+0100 D EXECUTOR [repl writer worker 5] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 6] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 7] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 8] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 9] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 10] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 11] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 13] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 12] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 14] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 15] starting thread in pool repl writer worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 0] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 1] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 2] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 4] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 3] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 5] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 6] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 7] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 8] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 9] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 10] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 11] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 12] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 13] starting thread in pool repl prefetch 
worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 14] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 15] starting thread in pool repl prefetch worker Pool 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D STORAGE [ReplBatcher] looking up metadata for: local.oplog.rs @ RecordId(4) 2016-12-09T20:57:09.954+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:09.954+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 2 on host localhost:12345 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D STORAGE [ReplBatcher] WT begin_transaction 2016-12-09T20:57:09.954+0100 D REPL [rsSync] returning minvalid: (term: 87, timestamp: Dec 9 20:55:30:2)({ ts: Timestamp 1481313330000|2, t: 87 }) 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D STORAGE [ReplBatcher] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } } 2016-12-09T20:57:09.954+0100 I REPL [ReplicationExecutor] transition to SECONDARY 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 4] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D STORAGE [ReplBatcher] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } 2016-12-09T20:57:09.954+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl writer worker 5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.954+0100 D EXECUTOR [repl prefetch worker 3] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D STORAGE [ReplBatcher] WT rollback_transaction 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 5] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 6] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 8] waiting for work; I am one of 16 thread(s); 
the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 7] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 8] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 9] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 10] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 11] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl writer worker 15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 12] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:31002 - HostUnreachable: Connection refused 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 13] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 4 -- target:localhost:31002 db:admin cmd:{ isMaster: 1 } reason: HostUnreachable: Connection refused 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 14] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 3: HostUnreachable: Connection refused 2016-12-09T20:57:09.955+0100 D EXECUTOR [repl prefetch worker 15] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:09.955+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:31002; HostUnreachable: Connection refused 2016-12-09T20:57:09.955+0100 D REPL [ReplicationExecutor] Bad heartbeat response from localhost:31002; trying again; Retries left: 1; 2ms have already elapsed 2016-12-09T20:57:09.955+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:09.955Z 2016-12-09T20:57:09.955+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 5 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:19.953+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 87 
} 2016-12-09T20:57:09.955+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:31002 2016-12-09T20:57:09.955+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:12345 2016-12-09T20:57:09.955+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 1 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:19.953+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 87 } 2016-12-09T20:57:09.955+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 1 on host localhost:12345 2016-12-09T20:57:09.956+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 1 out: Operation aborted. 2016-12-09T20:57:09.956+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:0, msg: 2016-12-09T20:57:09.956+0100 I REPL [ReplicationExecutor] Member localhost:12345 is now in state SECONDARY 2016-12-09T20:57:09.956+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:14.956Z 2016-12-09T20:57:09.956+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:31002 - HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 6 -- target:localhost:31002 db:admin cmd:{ isMaster: 1 } reason: HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 5: HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:31002; HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 D REPL [ReplicationExecutor] Bad heartbeat response from localhost:31002; trying again; Retries left: 0; 3ms have already elapsed 2016-12-09T20:57:09.956+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:09.956Z 2016-12-09T20:57:09.956+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 8 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:19.953+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 87 } 2016-12-09T20:57:09.956+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:31002 2016-12-09T20:57:09.956+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:31002 - HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 9 -- target:localhost:31002 db:admin cmd:{ isMaster: 1 } reason: HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 8: HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:31002; HostUnreachable: Connection refused 2016-12-09T20:57:09.956+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:2, msg: Connection refused 2016-12-09T20:57:09.956+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:14.956Z 2016-12-09T20:57:10.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:10.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:11.005+0100 D STORAGE [ftdc] WT begin_transaction 
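[annotation] The config logged above (replica set "rs", version 7, three members, with localhost:12345 carrying priority 100) together with the repeated "Connection refused" heartbeat warnings simply means the third member, localhost:31002, was not running yet at this point. As a hedged illustration (not taken from this session), the same information can be read back from any running member in the mongo shell; the hosts and priorities shown are the ones from the log, everything else is assumption:

    // Sketch, mongo shell: inspect the config this node just loaded and how it sees its peers.
    rs.conf().members.map(function (m) { return m._id + " " + m.host + " priority=" + m.priority; })
    // per the log: 0 localhost:12345 priority=100, 1 localhost:31001 priority=1, 2 localhost:31002 priority=1
    rs.status().members.forEach(function (m) { print(m.name + " -> " + m.stateStr); })
    // a member whose heartbeats are refused, like localhost:31002 above, shows up here as unhealthy/unreachable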
2016-12-09T20:57:11.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:12.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:12.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:12.796+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58880 #1 (1 connection now open) 2016-12-09T20:57:12.796+0100 D COMMAND [conn1] run command admin.$cmd { _isSelf: 1 } 2016-12-09T20:57:12.796+0100 I COMMAND [conn1] command admin.$cmd command: _isSelf { _isSelf: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:53 locks:{} protocol:op_query 0ms 2016-12-09T20:57:12.796+0100 D NETWORK [conn1] Socket recv() conn closed? 127.0.0.1:58880 2016-12-09T20:57:12.796+0100 D NETWORK [conn1] SocketException: remote: 127.0.0.1:58880 error: 9001 socket exception [CLOSED] server [127.0.0.1:58880] 2016-12-09T20:57:12.796+0100 I NETWORK [conn1] end connection 127.0.0.1:58880 (0 connections now open) 2016-12-09T20:57:12.799+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58885 #2 (1 connection now open) 2016-12-09T20:57:12.799+0100 D COMMAND [conn2] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:57:12.799+0100 I COMMAND [conn2] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:327 locks:{} protocol:op_query 0ms 2016-12-09T20:57:12.799+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 87 } 2016-12-09T20:57:12.799+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:12.799+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 87 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:57:13.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:13.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:13.943+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58888 #3 (2 connections now open) 2016-12-09T20:57:13.943+0100 D COMMAND [conn3] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:57:13.943+0100 I COMMAND [conn3] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:327 locks:{} protocol:op_query 0ms 2016-12-09T20:57:13.944+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 87 } 2016-12-09T20:57:13.944+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:13.944+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 87 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:57:14.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:14.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:14.092+0100 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 87, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313330000|2, t: 87 } } 2016-12-09T20:57:14.092+0100 D COMMAND [conn3] command: replSetRequestVotes 2016-12-09T20:57:14.093+0100 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:14.093+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:14.093+0100 D WRITE [conn3] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:14.093+0100 D STORAGE [conn3] WT commit_transaction 2016-12-09T20:57:14.093+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:14.093+0100 D STORAGE [conn3] WT rollback_transaction 2016-12-09T20:57:14.093+0100 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 87, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313330000|2, t: 87 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:57:14.094+0100 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 88, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313330000|2, t: 87 } } 2016-12-09T20:57:14.094+0100 D COMMAND [conn3] command: replSetRequestVotes 2016-12-09T20:57:14.094+0100 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:14.094+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:14.095+0100 D WRITE [conn3] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:14.095+0100 D STORAGE [conn3] WT commit_transaction 2016-12-09T20:57:14.095+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:14.095+0100 D STORAGE [conn3] WT rollback_transaction 2016-12-09T20:57:14.095+0100 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 88, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313330000|2, t: 87 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:57:14.095+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:14.095+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:14.095+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:57:14.959+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 10 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:24.959+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:14.959+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 11 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:24.959+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:14.959+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 10 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:24.959+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 
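[annotation] The two replSetRequestVotes commands above are the dry-run (term 87) and the real vote request (term 88) from localhost:12345; this node grants the vote and persists it in local.replset.election, which is what the WT begin/commit_transaction pair around the update reflects. A hedged sketch of inspecting that collection afterwards, purely illustrative and with the field shape assumed from this 3.2 log's context:

    // Sketch, mongo shell: the last vote this node granted is stored durably so a restart
    // cannot lead to voting twice in the same term.
    db.getSiblingDB("local").getCollection("replset.election").findOne()
    // illustrative shape only, e.g. a document carrying term: 88 and the candidate's index;
    // exact field names may differ between versions.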
2016-12-09T20:57:14.959+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 10 on host localhost:12345 2016-12-09T20:57:14.959+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:31002 2016-12-09T20:57:14.960+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 10 out: Operation aborted. 2016-12-09T20:57:14.960+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:0, msg: 2016-12-09T20:57:14.960+0100 I REPL [ReplicationExecutor] Member localhost:12345 is now in state PRIMARY 2016-12-09T20:57:14.960+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:19.960Z 2016-12-09T20:57:14.961+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 12 on host localhost:31002 2016-12-09T20:57:14.961+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:31002 2016-12-09T20:57:14.961+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 11 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:24.959+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:14.962+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 11 on host localhost:31002 2016-12-09T20:57:14.962+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 11 out: Operation aborted. 2016-12-09T20:57:14.962+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:14.962+0100 I REPL [ReplicationExecutor] Member localhost:31002 is now in state SECONDARY 2016-12-09T20:57:14.962+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:19.962Z 2016-12-09T20:57:15.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:15.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:16.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:16.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:16.101+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:16.101+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:16.101+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:17.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:17.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:17.805+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:17.805+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:17.806+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:18.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:18.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:18.105+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, 
term: 88 } 2016-12-09T20:57:18.105+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:18.106+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:19.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:19.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:19.953+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:18.106+0100 2016-12-09T20:57:19.953+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:17.806+0100 2016-12-09T20:57:19.953+0100 D REPL [ReplicationExecutor] earliest member 2 date: 2016-12-09T20:57:17.806+0100 2016-12-09T20:57:19.953+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:57:27.806+0100 2016-12-09T20:57:19.962+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 15 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:29.962+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:19.963+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 16 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:29.963+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:19.963+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 15 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:29.962+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:19.963+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 15 on host localhost:12345 2016-12-09T20:57:19.963+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 16 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:29.963+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:19.963+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 16 on host localhost:31002 2016-12-09T20:57:19.963+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 16 out: Operation aborted. 2016-12-09T20:57:19.963+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 15 out: Operation aborted. 
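[annotation] The incoming replSetHeartbeat commands on conn2 and conn3 above arrive roughly every two seconds, which matches heartbeatIntervalMillis: 2000 in the settings block of the config logged earlier (with heartbeatTimeoutSecs: 10 and electionTimeoutMillis: 10000 governing the failure and election behaviour). A hedged, illustrative way to read those timing parameters back from the shell:

    // Sketch, mongo shell: the timing parameters driving the heartbeat traffic above.
    var s = rs.conf().settings;
    print("heartbeat every " + s.heartbeatIntervalMillis + " ms, election timeout " + s.electionTimeoutMillis + " ms");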
2016-12-09T20:57:19.963+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:19.963+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:24.963Z 2016-12-09T20:57:19.964+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:0, msg: 2016-12-09T20:57:19.964+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:24.964Z 2016-12-09T20:57:19.987+0100 I REPL [ReplicationExecutor] syncing from: localhost:31002 2016-12-09T20:57:19.987+0100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2016-12-09T20:57:19.987+0100 D NETWORK [rsBackgroundSync] connected to server localhost:31002 (127.0.0.1) 2016-12-09T20:57:19.988+0100 D REPL [SyncSourceFeedback] resetting connection in sync source feedback 2016-12-09T20:57:19.988+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:57:19.988+0100 I REPL [SyncSourceFeedback] setting syncSourceFeedback to localhost:31002 2016-12-09T20:57:19.988+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:57:19.988+0100 D REPL [rsBackgroundSync] setting appliedThrough to: (term: 87, timestamp: Dec 9 20:55:30:2)({ ts: Timestamp 1481313330000|2, t: 87 }) 2016-12-09T20:57:19.988+0100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2016-12-09T20:57:19.988+0100 D QUERY [rsBackgroundSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:19.989+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:57:19.989+0100 D WRITE [rsBackgroundSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:19.989+0100 D STORAGE [rsBackgroundSync] WT commit_transaction 2016-12-09T20:57:19.989+0100 D NETWORK [SyncSourceFeedback] connected to server localhost:31002 (127.0.0.1) 2016-12-09T20:57:19.989+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:57:19.989+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:57:19.989+0100 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on localhost:31002 starting at filter: { ts: { $gte: Timestamp 1481313330000|2 } } 2016-12-09T20:57:19.989+0100 D EXECUTOR [rsBackgroundSync] Scheduling remote command request: RemoteCommand 19 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:29.989+0100 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313330000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 88 } 2016-12-09T20:57:19.989+0100 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 19 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:29.989+0100 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313330000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 88 } 2016-12-09T20:57:19.989+0100 I ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to localhost:31002 2016-12-09T20:57:19.989+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 
1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:19.991+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 20 on host localhost:31002 2016-12-09T20:57:19.991+0100 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:31002 2016-12-09T20:57:19.991+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 19 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:29.989+0100 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313330000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 88 } 2016-12-09T20:57:19.991+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 19 on host localhost:31002 2016-12-09T20:57:19.992+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1481313330000|2, t: 87, h: 4187683196111166928, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1481313434000|2, t: 88, h: -4523595302769613131, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 13021766177, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:57:19.992+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 19 out: Operation aborted. 2016-12-09T20:57:19.992+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:57:19.992+0100 D REPL [rsBackgroundSync-0] fetcher read 2 operations from remote oplog starting at ts: Timestamp 1481313330000|2 and ending at ts: Timestamp 1481313434000|2 2016-12-09T20:57:19.992+0100 D REPL [rsBackgroundSync-0] batch resetting _lastOpTimeFetched: (term: 88, timestamp: Dec 9 20:57:14:2) 2016-12-09T20:57:19.992+0100 D REPL [rsSync] replication batch size is 1 2016-12-09T20:57:19.992+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:19.992+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:57:19.992+0100 D REPL [rsSync] setting minvalid to at least: (term: 88, timestamp: Dec 9 20:57:14:2)({ ts: Timestamp 1481313434000|2, t: 88 }) 2016-12-09T20:57:19.992+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:19.992+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:19.992+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ t,ts,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:19.992+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:57:19.992+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:19.992+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:57:19.992+0100 D EXECUTOR [repl writer worker 0] Executing a task on behalf of pool repl writer worker Pool 2016-12-09T20:57:19.993+0100 D EXECUTOR [repl writer worker 0] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:19.993+0100 D REPL [rsSync] setting appliedThrough to: (term: 88, timestamp: Dec 9 20:57:14:2)({ ts: Timestamp 1481313434000|2, t: 88 }) 2016-12-09T20:57:19.993+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:19.993+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:19.993+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:19.993+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:57:19.993+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:19.993+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:57:19.993+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:19.995+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 22 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:29.995+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:19.995+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 22 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:29.995+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:19.995+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:19.995+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 22 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:29.995+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:19.995+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 22 on host localhost:31002 2016-12-09T20:57:19.997+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:20.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:20.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:20.109+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:20.109+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:20.109+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:21.003+0100 D 
STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:21.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:22.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:22.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:22.114+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:22.114+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:22.114+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:22.808+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:22.808+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:22.809+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:23.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:23.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:24.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:24.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:24.120+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:24.120+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:24.120+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:24.813+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:24.813+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:24.813+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:24.964+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 23 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:34.964+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:24.964+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 24 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:34.964+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:24.964+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 23 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:34.964+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:24.964+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 23 on host localhost:31002 2016-12-09T20:57:24.964+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating 
asynchronous command: RemoteCommand 24 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:34.964+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:24.964+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 24 on host localhost:12345 2016-12-09T20:57:24.965+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 23 out: Operation aborted. 2016-12-09T20:57:24.965+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:24.965+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:26.965Z 2016-12-09T20:57:24.965+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 24 out: Operation aborted. 2016-12-09T20:57:24.965+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:0, msg: 2016-12-09T20:57:24.965+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:26.965Z 2016-12-09T20:57:25.001+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:25.001+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 13021766177, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:57:25.001+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:57:25.001+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 22 out: Operation aborted. 
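[annotation] The background-sync fetcher above chose localhost:31002 as its sync source, opened a tailable, awaitData cursor on local.oplog.rs filtered with { ts: { $gte: Timestamp 1481313330000|2 } }, received two no-op "new primary" entries (op: "n", terms 87 and 88), applied them, and is now keeping the cursor alive with getMore calls. A hedged shell sketch of the equivalent manual read, using the timestamp from the log (the shell constructor takes seconds, not milliseconds):

    // Sketch, mongo shell: read the same slice of the sync source's oplog that the fetcher
    // requested (run against localhost:31002).
    var oplog = db.getSiblingDB("local").getCollection("oplog.rs");
    oplog.find({ ts: { $gte: Timestamp(1481313330, 2) } }).sort({ $natural: 1 }).limit(5).forEach(printjson)
    // expect the two "new primary" no-ops from terms 87 and 88 shown in the firstBatch above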
2016-12-09T20:57:25.001+0100 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog 2016-12-09T20:57:25.002+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 28 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:35.001+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:25.002+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 28 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:35.001+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:25.002+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:25.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:25.002+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 28 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:35.001+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:25.002+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 28 on host localhost:31002 2016-12-09T20:57:25.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:26.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:26.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:26.125+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:26.125+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:26.125+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:26.816+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:26.816+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:26.816+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:26.965+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 29 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:36.965+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:26.965+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 30 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:36.965+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:26.965+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 29 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:36.965+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:26.965+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 29 on host 
localhost:31002 2016-12-09T20:57:26.966+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 30 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:36.965+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:26.966+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 30 on host localhost:12345 2016-12-09T20:57:26.966+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 29 out: Operation aborted. 2016-12-09T20:57:26.966+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:26.966+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:28.966Z 2016-12-09T20:57:26.966+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 30 out: Operation aborted. 2016-12-09T20:57:26.966+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:0, msg: 2016-12-09T20:57:26.966+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:28.966Z 2016-12-09T20:57:27.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:27.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:27.811+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:26.125+0100 2016-12-09T20:57:27.811+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:26.816+0100 2016-12-09T20:57:27.811+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:57:26.125+0100 2016-12-09T20:57:27.811+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:57:36.125+0100 2016-12-09T20:57:28.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:28.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:28.131+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:28.131+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:28.131+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:28.822+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:28.822+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:28.822+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:28.972+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 33 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:38.971+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:28.972+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 34 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:38.972+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:28.972+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 33 -- 
target:localhost:31002 db:admin expDate:2016-12-09T20:57:38.971+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:28.972+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 33 on host localhost:31002 2016-12-09T20:57:28.972+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 34 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:38.972+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:28.972+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 34 on host localhost:12345 2016-12-09T20:57:28.972+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 33 out: Operation aborted. 2016-12-09T20:57:28.973+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:28.973+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:30.972Z 2016-12-09T20:57:28.973+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 34 out: Operation aborted. 2016-12-09T20:57:28.973+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:0, msg: 2016-12-09T20:57:28.973+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:30.973Z 2016-12-09T20:57:29.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:29.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:30.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:30.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:30.007+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:30.007+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 13021766177, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:57:30.007+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:57:30.007+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 28 out: Operation aborted. 
2016-12-09T20:57:30.007+0100 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog 2016-12-09T20:57:30.007+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 38 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:40.007+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:30.007+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 38 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:40.007+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:30.007+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:30.007+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 38 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:40.007+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:30.007+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 38 on host localhost:31002 2016-12-09T20:57:30.136+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:30.136+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:30.136+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:30.825+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:30.825+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:30.825+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:30.977+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 39 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:30.977+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 40 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:30.977+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 39 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:30.977+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 39 on host localhost:31002 2016-12-09T20:57:30.977+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 40 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", 
fromId: 1, term: 88 } 2016-12-09T20:57:30.978+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 40 on host localhost:12345 2016-12-09T20:57:30.978+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 39 out: Operation aborted. 2016-12-09T20:57:30.978+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:30.978+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:32.978Z 2016-12-09T20:57:31.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:31.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:32.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:32.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:32.141+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:32.141+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:32.141+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:32.830+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:32.830+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:32.830+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:32.979+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 42 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:42.979+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:32.979+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 42 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:42.979+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:32.979+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 42 on host localhost:31002 2016-12-09T20:57:32.980+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 42 out: Operation aborted. 
2016-12-09T20:57:32.980+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:32.980+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:34.980Z 2016-12-09T20:57:33.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:33.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:34.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:34.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:34.148+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:34.148+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:34.148+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:34.836+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:34.836+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:34.836+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:34.981+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 44 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:44.981+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:34.981+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 44 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:44.981+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:34.981+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 44 on host localhost:31002 2016-12-09T20:57:34.982+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 44 out: Operation aborted. 
2016-12-09T20:57:34.982+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:34.982+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:36.982Z 2016-12-09T20:57:35.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:35.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:35.012+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:35.012+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 13021766177, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:57:35.012+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:57:35.012+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 38 out: Operation aborted. 2016-12-09T20:57:35.012+0100 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog 2016-12-09T20:57:35.012+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 47 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:45.012+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:35.012+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 47 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:45.012+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:35.012+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:35.013+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 47 -- target:localhost:31002 db:local expDate:2016-12-09T20:57:45.012+0100 cmd:{ getMore: 13021766177, collection: "oplog.rs", maxTimeMS: 5000, term: 88, lastKnownCommittedOpTime: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:35.013+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 47 on host localhost:31002 2016-12-09T20:57:36.006+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:36.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:36.126+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:34.148+0100 2016-12-09T20:57:36.126+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:34.836+0100 2016-12-09T20:57:36.126+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:57:34.148+0100 2016-12-09T20:57:36.126+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:57:44.148+0100 2016-12-09T20:57:36.148+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:36.148+0100 D COMMAND [conn3] command: replSetHeartbeat 
2016-12-09T20:57:36.148+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:36.841+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:36.841+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:36.841+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:36.987+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 48 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:46.987+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:36.987+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 48 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:46.987+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:36.987+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 48 on host localhost:31002 2016-12-09T20:57:36.988+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 48 out: Operation aborted. 2016-12-09T20:57:36.988+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:36.988+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:38.988Z 2016-12-09T20:57:37.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:37.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:37.845+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } 2016-12-09T20:57:37.845+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:37.845+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:38.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:38.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:38.153+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:38.153+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:38.154+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:38.990+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 50 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:48.990+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:38.990+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 50 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:48.990+0100 cmd:{ 
replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:38.990+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 50 on host localhost:31002 2016-12-09T20:57:38.991+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 50 out: Operation aborted. 2016-12-09T20:57:38.991+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:38.991+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:40.991Z 2016-12-09T20:57:39.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:39.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:39.095+0100 D COMMAND [conn2] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 88, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:39.095+0100 D COMMAND [conn2] command: replSetRequestVotes 2016-12-09T20:57:39.095+0100 D QUERY [conn2] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:39.095+0100 D STORAGE [conn2] WT begin_transaction 2016-12-09T20:57:39.095+0100 D WRITE [conn2] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:39.095+0100 D STORAGE [conn2] WT commit_transaction 2016-12-09T20:57:39.095+0100 D STORAGE [conn2] WT begin_transaction 2016-12-09T20:57:39.095+0100 D STORAGE [conn2] WT rollback_transaction 2016-12-09T20:57:39.095+0100 I COMMAND [conn2] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 88, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313434000|2, t: 88 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:57:39.096+0100 D COMMAND [conn2] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 89, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313434000|2, t: 88 } } 2016-12-09T20:57:39.096+0100 D COMMAND [conn2] command: replSetRequestVotes 2016-12-09T20:57:39.097+0100 D QUERY [conn2] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:39.097+0100 D STORAGE [conn2] WT begin_transaction 2016-12-09T20:57:39.097+0100 D WRITE [conn2] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:39.097+0100 D STORAGE [conn2] WT commit_transaction 2016-12-09T20:57:39.097+0100 D STORAGE [conn2] WT begin_transaction 2016-12-09T20:57:39.097+0100 D STORAGE [conn2] WT rollback_transaction 2016-12-09T20:57:39.097+0100 I COMMAND [conn2] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 89, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313434000|2, t: 88 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:57:39.098+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:39.098+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:39.098+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:57:39.855+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [ { ts: Timestamp 1481313459000|2, t: 89, h: -2465938348524082941, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 13021766177, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:57:39.855+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 47 out: Operation aborted. 2016-12-09T20:57:39.855+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:57:39.855+0100 D REPL [rsBackgroundSync-0] fetcher read 1 operations from remote oplog starting at ts: Timestamp 1481313459000|2 and ending at ts: Timestamp 1481313459000|2 2016-12-09T20:57:39.855+0100 D REPL [rsBackgroundSync-0] batch resetting _lastOpTimeFetched: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:57:39.855+0100 D REPL [rsSync] replication batch size is 1 2016-12-09T20:57:39.855+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:39.855+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:57:39.855+0100 D REPL [rsSync] setting minvalid to at least: (term: 89, timestamp: Dec 9 20:57:39:2)({ ts: Timestamp 1481313459000|2, t: 89 }) 2016-12-09T20:57:39.855+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:39.855+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:39.855+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ t,ts,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:39.855+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:57:39.855+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:39.856+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:57:39.856+0100 D EXECUTOR [repl writer worker 1] Executing a task on behalf of pool repl writer worker Pool 2016-12-09T20:57:39.856+0100 D EXECUTOR [repl writer worker 1] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:57:39.856+0100 D REPL [rsSync] setting appliedThrough to: (term: 89, timestamp: Dec 9 20:57:39:2)({ ts: Timestamp 1481313459000|2, t: 89 }) 2016-12-09T20:57:39.856+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:39.856+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:39.856+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:39.856+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:57:39.856+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:57:39.856+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:57:39.856+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, appliedOpTime: { ts: Timestamp 1481313330000|2, t: 87 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, appliedOpTime: { ts: Timestamp 1481313434000|2, t: 88 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:57:39.858+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 53 -- target:localhost:31002 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 13021766177 ] } 2016-12-09T20:57:39.858+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 53 -- target:localhost:31002 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 13021766177 ] } 2016-12-09T20:57:39.858+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:39.858+0100 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on localhost:31002 2016-12-09T20:57:39.858+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 53 -- target:localhost:31002 db:local cmd:{ killCursors: "oplog.rs", cursors: [ 13021766177 ] } 2016-12-09T20:57:39.858+0100 I REPL [ReplicationExecutor] could not find member to sync from 2016-12-09T20:57:39.858+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 53 on host localhost:31002 2016-12-09T20:57:39.858+0100 D ASIO [ReplicationExecutor] Canceling operation; original request was: RemoteCommand 40 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } 2016-12-09T20:57:39.858+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 
2016-12-09T19:57:39.858Z 2016-12-09T20:57:39.858+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:39.858Z 2016-12-09T20:57:39.858+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:39.858+0100 2016-12-09T20:57:39.858+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:39.858+0100 2016-12-09T20:57:39.858+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 40 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 88 } reason: CallbackCanceled: Callback canceled 2016-12-09T20:57:39.858+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:57:39.858+0100 2016-12-09T20:57:39.858+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:57:49.858+0100 2016-12-09T20:57:39.858+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 40 out: Operation aborted. 2016-12-09T20:57:39.858+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 55 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:40.977+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:39.858+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 57 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:49.858+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:39.858+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:57:39.859+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursorsKilled: [ 13021766177 ], cursorsNotFound: [], cursorsAlive: [], cursorsUnknown: [], ok: 1.0 } 2016-12-09T20:57:39.859+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 57 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:49.858+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:39.859+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 57 on host localhost:31002 2016-12-09T20:57:39.859+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:57:39.859+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:57:39.859+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 57 out: Operation aborted. 
2016-12-09T20:57:39.860+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:39.860+0100 I REPL [ReplicationExecutor] Member localhost:31002 is now in state PRIMARY 2016-12-09T20:57:39.860+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:44.860Z 2016-12-09T20:57:39.860+0100 D REPL [SyncSourceFeedback] resetting connection in sync source feedback 2016-12-09T20:57:39.860+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 56 on host localhost:12345 2016-12-09T20:57:40.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:40.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:40.157+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } 2016-12-09T20:57:40.157+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:40.157+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 88 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:40.980+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 55: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:57:40.980+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:57:40.980+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:0, msg: Couldn't get a connection within the time limit 2016-12-09T20:57:40.980+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:57:45.980Z 2016-12-09T20:57:41.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:41.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:41.103+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:41.103+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:41.103+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:42.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:42.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:43.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:43.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:43.106+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:43.106+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:43.106+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:44.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:44.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:44.861+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 60 -- target:localhost:31002 
db:admin expDate:2016-12-09T20:57:54.861+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:44.861+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 60 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:54.861+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:44.861+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 60 on host localhost:31002 2016-12-09T20:57:44.862+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 60 out: Operation aborted. 2016-12-09T20:57:44.862+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:44.862+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:49.862Z 2016-12-09T20:57:45.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:45.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:45.110+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:45.111+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:45.111+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:45.162+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 89 } 2016-12-09T20:57:45.162+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:45.162+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:45.980+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 62 -- target:localhost:12345 db:admin expDate:2016-12-09T20:57:55.980+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:46.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:46.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:47.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:47.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:47.113+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:47.113+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:47.113+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:47.167+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 89 } 2016-12-09T20:57:47.167+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:47.168+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 89 } keyUpdates:0 writeConflicts:0 
numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:48.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:48.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:49.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:49.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:49.118+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:49.118+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:49.118+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:49.173+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 89 } 2016-12-09T20:57:49.173+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:49.173+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:49.863+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:49.173+0100 2016-12-09T20:57:49.863+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:49.118+0100 2016-12-09T20:57:49.863+0100 D REPL [ReplicationExecutor] earliest member 2 date: 2016-12-09T20:57:49.118+0100 2016-12-09T20:57:49.863+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:57:59.118+0100 2016-12-09T20:57:49.864+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 63 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:59.863+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:49.864+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 63 -- target:localhost:31002 db:admin expDate:2016-12-09T20:57:59.863+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 89 } 2016-12-09T20:57:49.864+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 63 on host localhost:31002 2016-12-09T20:57:49.864+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 63 out: Operation aborted. 2016-12-09T20:57:49.864+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:49.864+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:54.864Z 2016-12-09T20:57:50.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:50.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:50.158+0100 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 89, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:57:50.158+0100 D COMMAND [conn3] command: replSetRequestVotes 2016-12-09T20:57:50.158+0100 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:50.158+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:50.158+0100 D WRITE [conn3] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:50.158+0100 D STORAGE [conn3] WT commit_transaction 2016-12-09T20:57:50.158+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:50.158+0100 D STORAGE [conn3] WT rollback_transaction 2016-12-09T20:57:50.158+0100 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 89, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:57:50.159+0100 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 90, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:57:50.159+0100 D COMMAND [conn3] command: replSetRequestVotes 2016-12-09T20:57:50.159+0100 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:57:50.159+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:50.159+0100 D WRITE [conn3] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:57:50.159+0100 D STORAGE [conn3] WT commit_transaction 2016-12-09T20:57:50.159+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:57:50.159+0100 D STORAGE [conn3] WT rollback_transaction 2016-12-09T20:57:50.159+0100 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 90, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:57:50.160+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:57:50.160+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:50.160+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:51.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:51.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:51.123+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } 2016-12-09T20:57:51.123+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:51.123+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 89 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:52.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:52.004+0100 D STORAGE [ftdc] WT rollback_transaction 
2016-12-09T20:57:52.165+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:57:52.165+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:52.166+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:53.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:53.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:54.006+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:54.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:54.166+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:57:54.167+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:54.167+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:57:54.865+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 65 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:04.865+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 90 } 2016-12-09T20:57:54.865+0100 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to localhost:31002 2016-12-09T20:57:54.865+0100 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections 2016-12-09T20:57:54.865+0100 I ASIO [ReplicationExecutor] Failed to close stream: Socket is not connected 2016-12-09T20:57:54.865+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:31002 2016-12-09T20:57:54.866+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 66 on host localhost:31002 2016-12-09T20:57:54.867+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:31002 2016-12-09T20:57:54.867+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 65 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:04.865+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 90 } 2016-12-09T20:57:54.867+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 65 on host localhost:31002 2016-12-09T20:57:54.867+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 65 out: Operation aborted. 
2016-12-09T20:57:54.867+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:54.867+0100 I REPL [ReplicationExecutor] Member localhost:31002 is now in state SECONDARY 2016-12-09T20:57:54.867+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:57:59.867Z 2016-12-09T20:57:55.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:55.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:55.980+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 62: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:57:55.980+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:57:55.980+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:0, msg: Couldn't get a connection within the time limit 2016-12-09T20:57:55.980+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:00.980Z 2016-12-09T20:57:56.000+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:56.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:56.125+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 90 } 2016-12-09T20:57:56.125+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:57:56.125+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:57:56.174+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:57:56.174+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:56.174+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:57:57.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:57.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:58.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:58.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:58.179+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:57:58.179+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:57:58.179+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:57:59.006+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:57:59.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:57:59.122+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:58.179+0100 2016-12-09T20:57:59.122+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:57:56.125+0100 2016-12-09T20:57:59.122+0100 D REPL [ReplicationExecutor] earliest member 2 date: 2016-12-09T20:57:56.125+0100 
2016-12-09T20:57:59.122+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:06.125+0100 2016-12-09T20:57:59.859+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:12345 - ExceededTimeLimit: Operation timed out 2016-12-09T20:57:59.859+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 56 -- target:localhost:12345 db:admin cmd:{ isMaster: 1 } reason: ExceededTimeLimit: Operation timed out 2016-12-09T20:57:59.872+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 68 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:09.872+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 90 } 2016-12-09T20:57:59.872+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 68 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:09.872+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 90 } 2016-12-09T20:57:59.872+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 68 on host localhost:31002 2016-12-09T20:57:59.873+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 68 out: Operation aborted. 2016-12-09T20:57:59.873+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:57:59.873+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:04.873Z 2016-12-09T20:58:00.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:00.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:00.185+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:58:00.185+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:58:00.185+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:00.616+0100 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms 2016-12-09T20:58:00.616+0100 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected 2016-12-09T20:58:00.616+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 70 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:10.616+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 90, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:58:00.616+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 71 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:10.616+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 90, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:58:00.616+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:00.617+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 71 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:10.616+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 90, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 
2016-12-09T20:58:00.617+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 71 on host localhost:31002 2016-12-09T20:58:00.617+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 71 out: Operation aborted. 2016-12-09T20:58:00.617+0100 D REPL [ReplicationExecutor] VoteRequester: Got yes vote from localhost:31002, resp:{ term: 90, voteGranted: true, reason: "", ok: 1.0 } 2016-12-09T20:58:00.618+0100 I REPL [ReplicationExecutor] dry election run succeeded, running for election 2016-12-09T20:58:00.618+0100 D EXECUTOR [replExecDBWorker-0] Executing a task on behalf of pool replExecDBWorker-Pool 2016-12-09T20:58:00.618+0100 D QUERY [replExecDBWorker-0] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:00.618+0100 D STORAGE [replExecDBWorker-0] WT begin_transaction 2016-12-09T20:58:00.618+0100 D WRITE [replExecDBWorker-0] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:00.618+0100 D STORAGE [replExecDBWorker-0] WT commit_transaction 2016-12-09T20:58:00.618+0100 D STORAGE [replExecDBWorker-0] WT begin_transaction 2016-12-09T20:58:00.618+0100 D STORAGE [replExecDBWorker-0] WT rollback_transaction 2016-12-09T20:58:00.618+0100 D EXECUTOR [replExecDBWorker-0] waiting for work; I am one of 3 thread(s); the minimum number of threads is 3 2016-12-09T20:58:00.618+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 74 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:10.618+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 91, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:58:00.618+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 75 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:10.618+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 91, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:58:00.618+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:00.618+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 75 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:10.618+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 91, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:58:00.618+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 75 on host localhost:31002 2016-12-09T20:58:00.618+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 72 on host localhost:12345 2016-12-09T20:58:00.619+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 75 out: Operation aborted. 
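The entries above show the protocol-version-1 election sequence: a dry-run replSetRequestVotes in term 90, then a real vote request in term 91 once the dry run succeeds. The "seen no PRIMARY in the past 10000ms" trigger earlier corresponds to the replica set's election timeout. A sketch for inspecting that setting and the configured members, assuming pymongo and a standard pv1 settings document (the set name "rs", config version 7, and ports come from this log):

    from pymongo import MongoClient

    client = MongoClient("localhost", 31001)
    config = client.admin.command("replSetGetConfig")["config"]
    print(config["_id"], "config version", config["version"])   # "rs", 7 in this log
    print("electionTimeoutMillis:", config["settings"]["electionTimeoutMillis"])
    for member in config["members"]:
        print(member["_id"], member["host"])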
2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] VoteRequester: Got yes vote from localhost:31002, resp:{ term: 91, voteGranted: true, reason: "", ok: 1.0 } 2016-12-09T20:58:00.619+0100 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 91 2016-12-09T20:58:00.619+0100 I REPL [ReplicationExecutor] transition to PRIMARY 2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:00.619Z 2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:00.619Z 2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:00.619+0100 2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:00.619+0100 2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:00.619+0100 2016-12-09T20:58:00.619+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:10.619+0100 2016-12-09T20:58:00.619+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 78 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:10.619+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:00.619+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 80 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:10.619+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:00.619+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:00.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 80 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:10.619+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:00.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 80 on host localhost:31002 2016-12-09T20:58:00.620+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:00.620+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:02.620Z 2016-12-09T20:58:00.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 80 out: Operation aborted. 2016-12-09T20:58:00.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 76 on host localhost:12345 2016-12-09T20:58:00.621+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 79 on host localhost:12345 2016-12-09T20:58:00.905+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:00.906+0100 D REPL [rsSync] returning oplog delete from point: 0:0 2016-12-09T20:58:00.906+0100 D REPL [rsSync] setting appliedThrough to: (term: -1, timestamp: Jan 1 01:00:00:0)({ ts: Timestamp 0|0, t: -1 }) 2016-12-09T20:58:00.906+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:00.906+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:00.906+0100 D REPL [rsSync] returning initial sync flag value of 0 2016-12-09T20:58:00.906+0100 D REPL [rsSync] Removing temporary collections from app 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] looking up metadata for: app.test @ RecordId(7) 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" } 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:00.906+0100 I REPL [rsSync] transition to primary complete; database writes are now permitted 2016-12-09T20:58:00.906+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:58:01.006+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:01.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:01.129+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } 2016-12-09T20:58:01.129+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:58:01.129+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:01.909+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58969 #4 (3 connections now open) 2016-12-09T20:58:01.909+0100 D COMMAND [conn4] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:01.909+0100 I COMMAND [conn4] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:01.909+0100 D QUERY [conn4] Running query: query: {} sort: {} projection: {} ntoreturn=1 2016-12-09T20:58:01.909+0100 D QUERY [conn4] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN 2016-12-09T20:58:01.909+0100 D STORAGE [conn4] WT begin_transaction 2016-12-09T20:58:01.909+0100 D STORAGE [conn4] WT rollback_transaction 2016-12-09T20:58:01.909+0100 I COMMAND [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:01.910+0100 D NETWORK [conn4] Socket recv() conn closed? 
127.0.0.1:58969 2016-12-09T20:58:01.910+0100 D NETWORK [conn4] SocketException: remote: 127.0.0.1:58969 error: 9001 socket exception [CLOSED] server [127.0.0.1:58969] 2016-12-09T20:58:01.910+0100 I NETWORK [conn4] end connection 127.0.0.1:58969 (2 connections now open) 2016-12-09T20:58:01.910+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58970 #5 (3 connections now open) 2016-12-09T20:58:01.910+0100 D COMMAND [conn5] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:01.911+0100 I COMMAND [conn5] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:01.911+0100 D COMMAND [conn5] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:01.911+0100 D COMMAND [conn5] command: replSetUpdatePosition 2016-12-09T20:58:01.911+0100 D REPL [conn5] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 89, timestamp: Dec 9 20:57:39:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.911+0100 D REPL [conn5] Node with memberID 0 currently has optime (term: 87, timestamp: Dec 9 20:55:30:2) durable through (term: 87, timestamp: Dec 9 20:55:30:2); updating to optime (term: 89, timestamp: Dec 9 20:57:39:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.911+0100 D REPL [conn5] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 89, timestamp: Dec 9 20:57:39:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.911+0100 D REPL [conn5] Node with memberID 2 currently has optime (term: 89, timestamp: Dec 9 20:57:39:2) durable through (term: 89, timestamp: Dec 9 20:57:39:2); updating to optime (term: 89, timestamp: Dec 9 20:57:39:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.911+0100 I COMMAND [conn5] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:01.911+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:01.911+0100 2016-12-09T20:58:01.912+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:01.911+0100 2016-12-09T20:58:01.912+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:01.911+0100 2016-12-09T20:58:01.912+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:11.911+0100 2016-12-09T20:58:01.912+0100 I NETWORK [initandlisten] connection accepted from 
127.0.0.1:58972 #6 (4 connections now open) 2016-12-09T20:58:01.913+0100 D COMMAND [conn6] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:01.913+0100 I COMMAND [conn6] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:01.913+0100 D COMMAND [conn6] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313459000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 91 } 2016-12-09T20:58:01.913+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:01.913+0100 D QUERY [conn6] Using direct oplog seek 2016-12-09T20:58:01.913+0100 D WRITE [conn6] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying 2016-12-09T20:58:01.913+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:01.913+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:01.913+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:01.913+0100 I COMMAND [conn6] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313459000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 91 } planSummary: COLLSCAN cursorid:15068268194 keysExamined:0 docsExamined:2 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:2 reslen:505 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms 2016-12-09T20:58:01.914+0100 D COMMAND [conn5] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:01.914+0100 D COMMAND [conn5] command: replSetUpdatePosition 2016-12-09T20:58:01.914+0100 D REPL [conn5] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 89, timestamp: Dec 9 20:57:39:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.914+0100 D REPL [conn5] Node with memberID 0 currently has optime (term: 89, timestamp: Dec 9 20:57:39:2) durable through (term: 89, timestamp: Dec 9 20:57:39:2); updating to optime (term: 89, timestamp: Dec 9 20:57:39:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.914+0100 D REPL [conn5] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.914+0100 D REPL [conn5] Node with memberID 2 currently has optime (term: 89, timestamp: Dec 9 20:57:39:2) durable through (term: 89, timestamp: Dec 9 20:57:39:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.915+0100 I COMMAND [conn5] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 
1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:01.915+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:01.914+0100 2016-12-09T20:58:01.915+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:01.915+0100 2016-12-09T20:58:01.915+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:01.914+0100 2016-12-09T20:58:01.915+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:11.914+0100 2016-12-09T20:58:01.916+0100 D COMMAND [conn6] run command local.$cmd { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313459000|2, t: 89 } } 2016-12-09T20:58:01.916+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:01.917+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:01.918+0100 D COMMAND [conn5] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:01.918+0100 D COMMAND [conn5] command: replSetUpdatePosition 2016-12-09T20:58:01.918+0100 D REPL [conn5] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 89, timestamp: Dec 9 20:57:39:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.918+0100 D REPL [conn5] Node with memberID 0 currently has optime (term: 89, timestamp: Dec 9 20:57:39:2) durable through (term: 89, timestamp: Dec 9 20:57:39:2); updating to optime (term: 89, timestamp: Dec 9 20:57:39:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:01.918+0100 D REPL [conn5] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:01.919+0100 D REPL [conn5] Node with memberID 2 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 89, timestamp: Dec 9 20:57:39:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:01.919+0100 I COMMAND [conn5] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:01.919+0100 D STORAGE [conn6] WT 
begin_transaction 2016-12-09T20:58:01.919+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:01.918+0100 2016-12-09T20:58:01.919+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:01.919+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:01.919+0100 2016-12-09T20:58:01.920+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:01.918+0100 2016-12-09T20:58:01.920+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:11.918+0100 2016-12-09T20:58:01.920+0100 I COMMAND [conn6] command local.oplog.rs command: getMore { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313459000|2, t: 89 } } planSummary: COLLSCAN cursorid:15068268194 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:292 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms 2016-12-09T20:58:01.920+0100 D COMMAND [conn6] run command local.$cmd { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:01.920+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:01.920+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:02.000+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:02.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:02.625+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 82 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:12.625+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:02.626+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 82 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:12.625+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:02.626+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 82 on host localhost:31002 2016-12-09T20:58:02.626+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 82 out: Operation aborted. 
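conn6 above is a secondary tailing the new primary's oplog: a find on local.oplog.rs with tailable, awaitData, and oplogReplay, followed by getMore calls that block for up to maxTimeMS waiting for new entries. A client-side sketch of the same pattern, assuming pymongo (op, ns, and ts are standard oplog entry fields):

    import time
    from pymongo import MongoClient, CursorType

    client = MongoClient("localhost", 31001)
    oplog = client.local["oplog.rs"]

    # Start from the newest entry, mirroring the { ts: { $gte: ... } } filter seen in the log.
    last = next(oplog.find().sort("$natural", -1).limit(1))
    cursor = oplog.find({"ts": {"$gte": last["ts"]}},
                        cursor_type=CursorType.TAILABLE_AWAIT,
                        oplog_replay=True)
    while cursor.alive:
        for entry in cursor:
            print(entry["ts"], entry["op"], entry.get("ns"))
        time.sleep(1)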
2016-12-09T20:58:02.626+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:02.626+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:04.626Z 2016-12-09T20:58:03.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:03.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:04.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:04.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:04.631+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 84 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:14.631+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:04.632+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 84 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:14.631+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:04.632+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 84 on host localhost:31002 2016-12-09T20:58:04.632+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 84 out: Operation aborted. 2016-12-09T20:58:04.632+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:04.632+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:06.632Z 2016-12-09T20:58:05.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:05.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:05.189+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } 2016-12-09T20:58:05.189+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:58:05.189+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 90 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:05.212+0100 D COMMAND [conn5] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:05.212+0100 D COMMAND [conn5] command: replSetUpdatePosition 2016-12-09T20:58:05.212+0100 D REPL [conn5] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 90, timestamp: Dec 9 20:57:50:2) and is durable through: (term: 90, timestamp: Dec 9 20:57:50:2) 2016-12-09T20:58:05.212+0100 D REPL [conn5] Node with memberID 0 currently has optime (term: 89, timestamp: Dec 9 20:57:39:2) durable through (term: 89, timestamp: Dec 9 20:57:39:2); updating to optime (term: 90, timestamp: Dec 9 20:57:50:2) and durable through (term: 90, timestamp: Dec 9 20:57:50:2) 2016-12-09T20:58:05.212+0100 D REPL [conn5] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable 
through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:05.212+0100 D REPL [conn5] Node with memberID 2 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 91, timestamp: Dec 9 20:58:00:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:05.212+0100 I COMMAND [conn5] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:05.212+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:05.212+0100 2016-12-09T20:58:05.212+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:05.212+0100 2016-12-09T20:58:05.212+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:05.212+0100 2016-12-09T20:58:05.212+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:15.212+0100 2016-12-09T20:58:05.228+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58978 #7 (5 connections now open) 2016-12-09T20:58:05.228+0100 D COMMAND [conn7] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:05.229+0100 I COMMAND [conn7] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:05.229+0100 D QUERY [conn7] Running query: query: {} sort: {} projection: {} ntoreturn=1 2016-12-09T20:58:05.229+0100 D QUERY [conn7] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN 2016-12-09T20:58:05.229+0100 D STORAGE [conn7] WT begin_transaction 2016-12-09T20:58:05.229+0100 D STORAGE [conn7] WT rollback_transaction 2016-12-09T20:58:05.229+0100 I COMMAND [conn7] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:05.229+0100 D QUERY [conn7] Running query: query: { ts: { $gte: Timestamp 1481313480000|2, $lte: Timestamp 1481313480000|2 } } sort: {} projection: {} 2016-12-09T20:58:05.229+0100 D STORAGE [conn7] WT begin_transaction 2016-12-09T20:58:05.229+0100 D QUERY [conn7] Using direct oplog seek 2016-12-09T20:58:05.229+0100 D WRITE [conn7] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying 2016-12-09T20:58:05.229+0100 D STORAGE [conn7] WT rollback_transaction 2016-12-09T20:58:05.229+0100 D STORAGE [conn7] WT begin_transaction 2016-12-09T20:58:05.229+0100 D STORAGE [conn7] WT rollback_transaction 2016-12-09T20:58:05.229+0100 I COMMAND [conn7] query local.oplog.rs query: { ts: { $gte: Timestamp 1481313480000|2, $lte: Timestamp 1481313480000|2 } } planSummary: COLLSCAN cursorid:14056283774 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:1 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:1 reslen:114 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } 0ms 2016-12-09T20:58:05.230+0100 D COMMAND [conn7] killcursors: found 1 of 1 2016-12-09T20:58:05.230+0100 I COMMAND [conn7] killcursors local.oplog.rs keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:05.230+0100 D NETWORK [conn7] Socket recv() conn closed? 
127.0.0.1:58978 2016-12-09T20:58:05.230+0100 D NETWORK [conn7] SocketException: remote: 127.0.0.1:58978 error: 9001 socket exception [CLOSED] server [127.0.0.1:58978] 2016-12-09T20:58:05.230+0100 I NETWORK [conn7] end connection 127.0.0.1:58978 (4 connections now open) 2016-12-09T20:58:05.230+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58979 #8 (5 connections now open) 2016-12-09T20:58:05.230+0100 D COMMAND [conn8] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:05.230+0100 I COMMAND [conn8] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:05.231+0100 D COMMAND [conn8] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:05.231+0100 D COMMAND [conn8] command: replSetUpdatePosition 2016-12-09T20:58:05.231+0100 D REPL [conn8] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 89, timestamp: Dec 9 20:57:39:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:05.231+0100 D REPL [conn8] Node with memberID 0 currently has optime (term: 90, timestamp: Dec 9 20:57:50:2) durable through (term: 90, timestamp: Dec 9 20:57:50:2); updating to optime (term: 89, timestamp: Dec 9 20:57:39:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:05.231+0100 D REPL [conn8] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:05.231+0100 D REPL [conn8] Node with memberID 2 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 91, timestamp: Dec 9 20:58:00:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:05.231+0100 I COMMAND [conn8] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:05.231+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:05.231+0100 2016-12-09T20:58:05.231+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:05.231+0100 2016-12-09T20:58:05.231+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:05.231+0100 2016-12-09T20:58:05.231+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:15.231+0100 2016-12-09T20:58:05.232+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58981 #9 (6 connections now open) 2016-12-09T20:58:05.232+0100 D COMMAND [conn9] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:05.232+0100 I COMMAND [conn9] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} 
protocol:op_query 0ms 2016-12-09T20:58:05.232+0100 D COMMAND [conn9] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313459000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 91 } 2016-12-09T20:58:05.232+0100 D STORAGE [conn9] WT begin_transaction 2016-12-09T20:58:05.232+0100 D QUERY [conn9] Using direct oplog seek 2016-12-09T20:58:05.232+0100 D WRITE [conn9] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying 2016-12-09T20:58:05.232+0100 D STORAGE [conn9] WT rollback_transaction 2016-12-09T20:58:05.232+0100 D STORAGE [conn9] WT begin_transaction 2016-12-09T20:58:05.232+0100 D STORAGE [conn9] WT rollback_transaction 2016-12-09T20:58:05.232+0100 I COMMAND [conn9] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313459000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 91 } planSummary: COLLSCAN cursorid:16300149474 keysExamined:0 docsExamined:2 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:2 reslen:505 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms 2016-12-09T20:58:05.233+0100 D COMMAND [conn8] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:05.233+0100 D COMMAND [conn8] command: replSetUpdatePosition 2016-12-09T20:58:05.233+0100 D REPL [conn8] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:05.233+0100 D REPL [conn8] Node with memberID 0 currently has optime (term: 90, timestamp: Dec 9 20:57:50:2) durable through (term: 90, timestamp: Dec 9 20:57:50:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 89, timestamp: Dec 9 20:57:39:2) 2016-12-09T20:58:05.233+0100 D REPL [conn8] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:05.233+0100 D REPL [conn8] Node with memberID 2 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 91, timestamp: Dec 9 20:58:00:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:05.233+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:05.233+0100 2016-12-09T20:58:05.233+0100 I COMMAND [conn8] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:05.234+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:05.233+0100 
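The replSetUpdatePosition traffic above is how downstream members report their applied and durable optimes back toward the primary; those per-member optimes are what a w:"majority" write concern is satisfied against. With member _id:0 unreachable, a majority (2 of 3) is still available through 31001 and 31002, so a majority-acknowledged write should still succeed. A sketch, assuming pymongo and the app.test namespace seen in this log:

    from pymongo import MongoClient, WriteConcern

    # Replica set name "rs" and member ports come from this log.
    client = MongoClient("mongodb://localhost:31001,localhost:31002/?replicaSet=rs")
    coll = client.app.get_collection(
        "test", write_concern=WriteConcern(w="majority", wtimeout=5000))
    coll.insert_one({"probe": True})  # hypothetical document, used only to test acknowledgement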
2016-12-09T20:58:05.234+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:05.233+0100 2016-12-09T20:58:05.234+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:15.233+0100 2016-12-09T20:58:05.235+0100 D COMMAND [conn9] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 16300149474 ] } 2016-12-09T20:58:05.236+0100 I COMMAND [conn9] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 16300149474 ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:115 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:05.236+0100 D COMMAND [conn3] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 91 } 2016-12-09T20:58:05.236+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:58:05.236+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:05.237+0100 D NETWORK [conn8] Socket recv() conn closed? 127.0.0.1:58979 2016-12-09T20:58:05.237+0100 D NETWORK [conn8] SocketException: remote: 127.0.0.1:58979 error: 9001 socket exception [CLOSED] server [127.0.0.1:58979] 2016-12-09T20:58:05.237+0100 I NETWORK [conn8] end connection 127.0.0.1:58979 (5 connections now open) 2016-12-09T20:58:06.002+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:06.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:06.129+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } 2016-12-09T20:58:06.129+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:58:06.130+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:06.636+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 86 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:16.636+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:06.636+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 86 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:16.636+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:06.636+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 86 on host localhost:31002 2016-12-09T20:58:06.637+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 86 out: Operation aborted. 
2016-12-09T20:58:06.637+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:06.637+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:08.637Z 2016-12-09T20:58:06.922+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:06.922+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:06.922+0100 I COMMAND [conn6] command local.oplog.rs command: getMore { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } planSummary: COLLSCAN cursorid:15068268194 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:1 nreturned:0 reslen:292 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 5001ms 2016-12-09T20:58:06.924+0100 D COMMAND [conn6] run command local.$cmd { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:06.924+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:06.924+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:07.000+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:07.001+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:08.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:08.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:08.136+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } 2016-12-09T20:58:08.136+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:58:08.136+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:08.638+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 88 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:18.638+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:08.638+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 88 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:18.638+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:08.638+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 88 on host localhost:31002 2016-12-09T20:58:08.639+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 88 out: Operation aborted. 
2016-12-09T20:58:08.639+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:08.639+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:10.639Z 2016-12-09T20:58:09.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:09.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:09.948+0100 D INDEX [TTLMonitor] TTLMonitor thread awake 2016-12-09T20:58:09.948+0100 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2016-12-09T20:58:09.951+0100 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: app.test @ RecordId(7) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] WT begin_transaction 2016-12-09T20:58:09.951+0100 D COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms 2016-12-09T20:58:09.951+0100 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: app.test @ RecordId(7) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] WT rollback_transaction 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.me @ RecordId(1) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] WT begin_transaction 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.me @ RecordId(1) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-1--295440694794046494" }, ns: "local.me", ident: "collection-0--295440694794046494" } 
2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.me", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.me" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.oplog.rs @ RecordId(4) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { ns: "local.oplog.rs", ident: "collection-6--295440694794046494", md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.oplog.rs", options: { capped: true, size: 201326592, autoIndexId: false }, indexes: [] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.replset.election @ RecordId(6) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.replset.election @ RecordId(6) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-10--295440694794046494" }, ns: "local.replset.election", ident: "collection-9--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.replset.election", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.election" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.replset.minvalid @ RecordId(5) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.replset.minvalid @ RecordId(5) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-8--295440694794046494" }, ns: "local.replset.minvalid", ident: "collection-7--295440694794046494" } 2016-12-09T20:58:09.951+0100 D 
STORAGE [TTLMonitor] returning metadata: md: { ns: "local.replset.minvalid", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.replset.minvalid" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.startup_log @ RecordId(2) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.startup_log @ RecordId(2) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-3--295440694794046494" }, ns: "local.startup_log", ident: "collection-2--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.startup_log", options: { capped: true, size: 10485760 }, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.startup_log" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.system.replset @ RecordId(3) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] looking up metadata for: local.system.replset @ RecordId(3) 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] fetched CCE metadata: { md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-5--295440694794046494" }, ns: "local.system.replset", ident: "collection-4--295440694794046494" } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] returning metadata: md: { ns: "local.system.replset", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "local.system.replset" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:09.951+0100 D STORAGE [TTLMonitor] WT rollback_transaction 2016-12-09T20:58:09.953+0100 D NETWORK [HostnameCanonicalizationWorker] Hostname Canonicalizer is acquiring host FQDNs 2016-12-09T20:58:09.956+0100 D NETWORK [HostnameCanonicalizationWorker] Hostname 
Canonicalizer acquired FQDNs 2016-12-09T20:58:10.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:10.003+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:10.138+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } 2016-12-09T20:58:10.138+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:58:10.139+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:10.212+0100 D COMMAND [conn5] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:10.212+0100 D COMMAND [conn5] command: replSetUpdatePosition 2016-12-09T20:58:10.212+0100 D REPL [conn5] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 90, timestamp: Dec 9 20:57:50:2) and is durable through: (term: 90, timestamp: Dec 9 20:57:50:2) 2016-12-09T20:58:10.213+0100 D REPL [conn5] Node with memberID 0 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 90, timestamp: Dec 9 20:57:50:2); updating to optime (term: 90, timestamp: Dec 9 20:57:50:2) and durable through (term: 90, timestamp: Dec 9 20:57:50:2) 2016-12-09T20:58:10.213+0100 D REPL [conn5] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:10.213+0100 D REPL [conn5] Node with memberID 2 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 91, timestamp: Dec 9 20:58:00:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:10.213+0100 I COMMAND [conn5] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, appliedOpTime: { ts: Timestamp 1481313459000|2, t: 89 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:10.213+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:10.213+0100 2016-12-09T20:58:10.213+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:10.213+0100 2016-12-09T20:58:10.213+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:10.213+0100 2016-12-09T20:58:10.213+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:20.213+0100 2016-12-09T20:58:10.238+0100 D COMMAND [conn3] run command admin.$cmd { 
replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 91 } 2016-12-09T20:58:10.238+0100 D COMMAND [conn3] command: replSetHeartbeat 2016-12-09T20:58:10.238+0100 I COMMAND [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:10.621+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 70: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:10.621+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 74: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:10.621+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 78: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:10.621+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:10.622+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:0, msg: Couldn't get a connection within the time limit 2016-12-09T20:58:10.622+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:12.621Z 2016-12-09T20:58:10.640+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 90 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:20.640+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:10.641+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 90 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:20.640+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:10.641+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 90 on host localhost:31002 2016-12-09T20:58:10.641+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 90 out: Operation aborted. 
2016-12-09T20:58:10.641+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:10.641+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:12.641Z 2016-12-09T20:58:11.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:11.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:11.928+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:11.928+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:11.928+0100 I COMMAND [conn6] command local.oplog.rs command: getMore { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } planSummary: COLLSCAN cursorid:15068268194 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:1 nreturned:0 reslen:292 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 5004ms 2016-12-09T20:58:11.929+0100 D COMMAND [conn6] run command local.$cmd { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:11.929+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:11.929+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:12.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:12.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:12.139+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } 2016-12-09T20:58:12.139+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:58:12.139+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:12.627+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 92 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:22.626+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:12.644+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 93 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:22.644+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:12.645+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 93 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:22.644+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:12.645+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 93 on host localhost:31002 2016-12-09T20:58:12.645+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 93 out: Operation aborted. 
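The conn6 getMore above is another member tailing this node's oplog: a tailable, awaitData cursor blocks server-side for up to maxTimeMS (5000 ms here) and, when nothing new has been written, comes back with an empty batch, which is why the command is logged with nreturned:0 after roughly 5004 ms. A sketch of the equivalent client-side read (the start position is copied from the log, where "Timestamp 1481313480000|2" denotes seconds 1481313480, increment 2; everything else is illustrative):

# tail_oplog.py - tailable awaitData read of the oplog, mirroring the conn6 getMore above
import time
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient('localhost', 31001)
oplog = client.local['oplog.rs']

start = Timestamp(1481313480, 2)   # logged as "Timestamp 1481313480000|2"

cursor = oplog.find({'ts': {'$gte': start}},
                    cursor_type=CursorType.TAILABLE_AWAIT,
                    oplog_replay=True).max_await_time_ms(5000)  # same 5000 ms await as in the log

while cursor.alive:
    try:
        entry = cursor.next()
        print(entry['ts'], entry.get('op'), entry.get('ns'))
    except StopIteration:
        # no new entries within the await window: the server-side getMore blocked
        # about 5 s and returned an empty batch, exactly the "nreturned:0 ... 5004ms" entry
        time.sleep(1)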
2016-12-09T20:58:12.645+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:12.645+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:14.645Z 2016-12-09T20:58:13.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:13.007+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:14.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:14.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:14.141+0100 D COMMAND [conn2] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } 2016-12-09T20:58:14.141+0100 D COMMAND [conn2] command: replSetHeartbeat 2016-12-09T20:58:14.141+0100 I COMMAND [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 91 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:14.650+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 95 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:24.650+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:14.651+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 95 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:24.650+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:14.651+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 95 on host localhost:31002 2016-12-09T20:58:14.652+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 95 out: Operation aborted. 2016-12-09T20:58:14.652+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:14.652+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:16.652Z 2016-12-09T20:58:15.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:15.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:15.190+0100 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 91, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:15.190+0100 D COMMAND [conn3] command: replSetRequestVotes 2016-12-09T20:58:15.190+0100 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:15.190+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:58:15.190+0100 D WRITE [conn3] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:15.190+0100 D STORAGE [conn3] WT commit_transaction 2016-12-09T20:58:15.190+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:58:15.190+0100 D STORAGE [conn3] WT rollback_transaction 2016-12-09T20:58:15.190+0100 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 91, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:15.192+0100 D COMMAND [conn3] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 92, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:15.192+0100 D COMMAND [conn3] command: replSetRequestVotes 2016-12-09T20:58:15.192+0100 I REPL [ReplicationExecutor] stepping down from primary, because a new term has begun: 92 2016-12-09T20:58:15.192+0100 D EXECUTOR [replExecDBWorker-1] Executing a task on behalf of pool replExecDBWorker-Pool 2016-12-09T20:58:15.192+0100 I REPL [replExecDBWorker-1] transition to SECONDARY 2016-12-09T20:58:15.192+0100 D NETWORK [replExecDBWorker-1] Closing connection # 9 2016-12-09T20:58:15.193+0100 D NETWORK [replExecDBWorker-1] Closing connection # 5 2016-12-09T20:58:15.193+0100 D NETWORK [conn9] Socket recv() conn closed? 127.0.0.1:58981 2016-12-09T20:58:15.193+0100 D NETWORK [conn5] Socket recv() conn closed? 127.0.0.1:58970 2016-12-09T20:58:15.193+0100 D NETWORK [replExecDBWorker-1] Closing connection # 6 2016-12-09T20:58:15.193+0100 D NETWORK [conn9] SocketException: remote: 127.0.0.1:58981 error: 9001 socket exception [CLOSED] server [127.0.0.1:58981] 2016-12-09T20:58:15.193+0100 D NETWORK [conn5] SocketException: remote: 127.0.0.1:58970 error: 9001 socket exception [CLOSED] server [127.0.0.1:58970] 2016-12-09T20:58:15.193+0100 D NETWORK [replExecDBWorker-1] Skip closing connection # 3 2016-12-09T20:58:15.193+0100 I NETWORK [conn9] end connection 127.0.0.1:58981 (4 connections now open) 2016-12-09T20:58:15.193+0100 D NETWORK [replExecDBWorker-1] Closing connection # 2 2016-12-09T20:58:15.193+0100 I NETWORK [conn5] end connection 127.0.0.1:58970 (4 connections now open) 2016-12-09T20:58:15.193+0100 D NETWORK [conn2] Socket recv() conn closed? 127.0.0.1:58885 2016-12-09T20:58:15.193+0100 D EXECUTOR [replExecDBWorker-1] waiting for work; I am one of 3 thread(s); the minimum number of threads is 3 2016-12-09T20:58:15.193+0100 D NETWORK [conn2] SocketException: remote: 127.0.0.1:58885 error: 9001 socket exception [CLOSED] server [127.0.0.1:58885] 2016-12-09T20:58:15.193+0100 I NETWORK [conn2] end connection 127.0.0.1:58885 (2 connections now open) 2016-12-09T20:58:15.193+0100 D QUERY [conn3] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:15.193+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:58:15.193+0100 D WRITE [conn3] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:15.193+0100 D STORAGE [conn3] WT commit_transaction 2016-12-09T20:58:15.193+0100 D STORAGE [conn3] WT begin_transaction 2016-12-09T20:58:15.193+0100 D STORAGE [conn3] WT rollback_transaction 2016-12-09T20:58:15.193+0100 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 92, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:15.193+0100 D NETWORK [conn3] Socket recv() conn closed? 127.0.0.1:58888 2016-12-09T20:58:15.193+0100 D NETWORK [conn3] SocketException: remote: 127.0.0.1:58888 error: 9001 socket exception [CLOSED] server [127.0.0.1:58888] 2016-12-09T20:58:15.193+0100 I NETWORK [conn3] end connection 127.0.0.1:58888 (1 connection now open) 2016-12-09T20:58:15.195+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58985 #10 (2 connections now open) 2016-12-09T20:58:15.195+0100 D COMMAND [conn10] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:15.195+0100 I COMMAND [conn10] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:327 locks:{} protocol:op_query 0ms 2016-12-09T20:58:15.196+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:15.196+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:15.196+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:15.196+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:58987 #11 (3 connections now open) 2016-12-09T20:58:15.196+0100 D COMMAND [conn11] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:15.196+0100 I COMMAND [conn11] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:327 locks:{} protocol:op_query 0ms 2016-12-09T20:58:15.196+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 92 } 2016-12-09T20:58:15.196+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:15.197+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:15.951+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:58:15.952+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:58:15.952+0100 D REPL [rsBackgroundSync] bgsync fetch queue set to: (term: 91, timestamp: Dec 9 20:58:00:2) 6987913962352822846 2016-12-09T20:58:16.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:16.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:16.654+0100 D ASIO 
[ReplicationExecutor] startCommand: RemoteCommand 97 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:26.654+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:16.654+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 97 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:26.654+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 91 } 2016-12-09T20:58:16.654+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 97 on host localhost:31002 2016-12-09T20:58:16.655+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 97 out: Operation aborted. 2016-12-09T20:58:16.655+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:16.655+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:21.655Z 2016-12-09T20:58:16.934+0100 D STORAGE [conn6] WT begin_transaction 2016-12-09T20:58:16.935+0100 D STORAGE [conn6] WT rollback_transaction 2016-12-09T20:58:16.935+0100 I COMMAND [conn6] command local.oplog.rs command: getMore { getMore: 15068268194, collection: "oplog.rs", maxTimeMS: 5000, term: 91, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } planSummary: COLLSCAN cursorid:15068268194 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:1 nreturned:0 reslen:292 locks:{ Global: { acquireCount: { r: 6 } }, Database: { acquireCount: { r: 3 } }, oplog: { acquireCount: { r: 3 } } } protocol:op_command 5005ms 2016-12-09T20:58:16.935+0100 D NETWORK [conn6] Socket say send() errno:9 Bad file descriptor 127.0.0.1:58972 2016-12-09T20:58:16.935+0100 I NETWORK [conn6] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:58972] 2016-12-09T20:58:17.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:17.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:17.196+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:17.196+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:17.196+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:18.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:18.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:19.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:19.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:19.201+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:19.201+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:19.201+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:20.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:20.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:20.199+0100 D COMMAND [conn11] run 
command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 92 } 2016-12-09T20:58:20.199+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:20.199+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:20.217+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:19.201+0100 2016-12-09T20:58:20.217+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:20.199+0100 2016-12-09T20:58:20.217+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:19.201+0100 2016-12-09T20:58:20.217+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:29.201+0100 2016-12-09T20:58:20.620+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:12345 - ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 72 -- target:localhost:12345 db:admin cmd:{ isMaster: 1 } reason: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 92: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 D REPL [ReplicationExecutor] Bad heartbeat response from localhost:12345; trying again; Retries left: 1; 7994ms have already elapsed 2016-12-09T20:58:20.620+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:12345 - ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:20.620Z 2016-12-09T20:58:20.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 76 -- target:localhost:12345 db:admin cmd:{ isMaster: 1 } reason: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 99 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:22.626+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 92 } 2016-12-09T20:58:20.620+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:12345 - ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 79 -- target:localhost:12345 db:admin cmd:{ isMaster: 1 } reason: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:20.620+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:20.622+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 100 on host localhost:12345 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 7, dataSize: 2032 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-0--295440694794046494 -> { numRecords: 1, dataSize: 61 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-11--295440694794046494 -> { numRecords: 
44443, dataSize: 1288847 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-2--295440694794046494 -> { numRecords: 9, dataSize: 14051 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-4--295440694794046494 -> { numRecords: 1, dataSize: 705 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-6--295440694794046494 -> { numRecords: 44518, dataSize: 4673639 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-7--295440694794046494 -> { numRecords: 1, dataSize: 75 } 2016-12-09T20:58:20.763+0100 D STORAGE [WTJournalFlusher] WiredTigerSizeStorer::storeInto table:collection-9--295440694794046494 -> { numRecords: 1, dataSize: 60 } 2016-12-09T20:58:21.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:21.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:21.205+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:21.205+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:21.206+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:21.659+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 101 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:31.659+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 92 } 2016-12-09T20:58:21.659+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 101 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:31.659+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 92 } 2016-12-09T20:58:21.659+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 101 on host localhost:31002 2016-12-09T20:58:21.659+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 101 out: Operation aborted. 
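This node is now a secondary in term 92, its outbound heartbeats to localhost:12345 keep timing out, and in the entries that follow localhost:31002 asks for votes and wins term 93. How long members wait on heartbeats before calling an election is governed by the replica set's timing settings; this log never prints them, so the sketch below simply reads them, and the defaults noted in the comments (for protocolVersion 1) are an assumption about this particular set:

# show_timing.py - read the heartbeat/election timing settings of the set "rs"
from pymongo import MongoClient

client = MongoClient('localhost', 31001)
cfg = client.admin.command('replSetGetConfig')['config']
settings = cfg.get('settings', {})

print('protocolVersion:        ', cfg.get('protocolVersion'))
print('heartbeatIntervalMillis:', settings.get('heartbeatIntervalMillis'))  # default 2000, matching the ~2 s cadence above
print('electionTimeoutMillis:  ', settings.get('electionTimeoutMillis'))    # default 10000
print('heartbeatTimeoutSecs:   ', settings.get('heartbeatTimeoutSecs'))     # default 10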
2016-12-09T20:58:21.660+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:21.660+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:26.660Z 2016-12-09T20:58:22.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:22.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:22.628+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 99: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:22.628+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:22.628+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:0, msg: Couldn't get a connection within the time limit 2016-12-09T20:58:22.628+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:27.628Z 2016-12-09T20:58:23.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:23.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:23.211+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:23.211+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:23.211+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:24.001+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:24.002+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:25.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:25.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:25.201+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 92 } 2016-12-09T20:58:25.201+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:25.201+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:25.213+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:25.213+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:25.213+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:26.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:26.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:26.126+0100 D COMMAND [conn11] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 92, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:26.126+0100 D COMMAND [conn11] command: replSetRequestVotes 2016-12-09T20:58:26.127+0100 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:26.127+0100 D STORAGE [conn11] WT begin_transaction 2016-12-09T20:58:26.127+0100 D WRITE [conn11] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:26.127+0100 D STORAGE [conn11] WT commit_transaction 2016-12-09T20:58:26.127+0100 D STORAGE [conn11] WT begin_transaction 2016-12-09T20:58:26.127+0100 D STORAGE [conn11] WT rollback_transaction 2016-12-09T20:58:26.127+0100 I COMMAND [conn11] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 92, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:26.128+0100 D COMMAND [conn11] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 93, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:26.128+0100 D COMMAND [conn11] command: replSetRequestVotes 2016-12-09T20:58:26.128+0100 D QUERY [conn11] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:26.128+0100 D STORAGE [conn11] WT begin_transaction 2016-12-09T20:58:26.128+0100 D WRITE [conn11] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:26.128+0100 D STORAGE [conn11] WT commit_transaction 2016-12-09T20:58:26.128+0100 D STORAGE [conn11] WT begin_transaction 2016-12-09T20:58:26.128+0100 D STORAGE [conn11] WT rollback_transaction 2016-12-09T20:58:26.128+0100 I COMMAND [conn11] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 93, candidateIndex: 2, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313480000|2, t: 91 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:26.129+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:26.129+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:26.129+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:26.662+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 103 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:36.662+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:26.663+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 103 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:36.662+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:26.663+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 103 on host localhost:31002 2016-12-09T20:58:26.664+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 103 out: 
Operation aborted. 2016-12-09T20:58:26.664+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:26.664+0100 I REPL [ReplicationExecutor] Member localhost:31002 is now in state PRIMARY 2016-12-09T20:58:26.664+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:31.664Z 2016-12-09T20:58:26.971+0100 I REPL [ReplicationExecutor] syncing from: localhost:31002 2016-12-09T20:58:26.971+0100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2016-12-09T20:58:26.971+0100 D NETWORK [rsBackgroundSync] connected to server localhost:31002 (127.0.0.1) 2016-12-09T20:58:26.972+0100 D REPL [SyncSourceFeedback] resetting connection in sync source feedback 2016-12-09T20:58:26.972+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:58:26.972+0100 I REPL [SyncSourceFeedback] setting syncSourceFeedback to localhost:31002 2016-12-09T20:58:26.972+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:58:26.972+0100 D REPL [rsBackgroundSync] setting appliedThrough to: (term: 91, timestamp: Dec 9 20:58:00:2)({ ts: Timestamp 1481313480000|2, t: 91 }) 2016-12-09T20:58:26.972+0100 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG 2016-12-09T20:58:26.972+0100 D QUERY [rsBackgroundSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:26.972+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:58:26.972+0100 D WRITE [rsBackgroundSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:26.973+0100 D STORAGE [rsBackgroundSync] WT commit_transaction 2016-12-09T20:58:26.973+0100 D STORAGE [rsBackgroundSync] WT begin_transaction 2016-12-09T20:58:26.973+0100 D NETWORK [SyncSourceFeedback] connected to server localhost:31002 (127.0.0.1) 2016-12-09T20:58:26.973+0100 D STORAGE [rsBackgroundSync] WT rollback_transaction 2016-12-09T20:58:26.973+0100 D REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on localhost:31002 starting at filter: { ts: { $gte: Timestamp 1481313480000|2 } } 2016-12-09T20:58:26.973+0100 D EXECUTOR [rsBackgroundSync] Scheduling remote command request: RemoteCommand 105 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.973+0100 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313480000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 93 } 2016-12-09T20:58:26.973+0100 D ASIO [rsBackgroundSync] startCommand: RemoteCommand 105 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.973+0100 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313480000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 93 } 2016-12-09T20:58:26.973+0100 I ASIO [rsBackgroundSync] dropping unhealthy pooled connection to localhost:31002 2016-12-09T20:58:26.973+0100 I ASIO [rsBackgroundSync] after drop, pool was empty, going to spawn some connections 2016-12-09T20:58:26.973+0100 I ASIO [rsBackgroundSync] Failed to close stream: Socket is not connected 2016-12-09T20:58:26.973+0100 I ASIO [NetworkInterfaceASIO-BGSync-0] Connecting to localhost:31002 2016-12-09T20:58:26.973+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 
91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:26.974+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 106 on host localhost:31002 2016-12-09T20:58:26.974+0100 I ASIO [NetworkInterfaceASIO-BGSync-0] Successfully connected to localhost:31002 2016-12-09T20:58:26.974+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 105 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.973+0100 cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313480000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 93 } 2016-12-09T20:58:26.975+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 105 on host localhost:31002 2016-12-09T20:58:26.975+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ waitedMS: 0, cursor: { firstBatch: [ { ts: Timestamp 1481313480000|2, t: 91, h: 6987913962352822846, v: 2, op: "n", ns: "", o: { msg: "new primary" } }, { ts: Timestamp 1481313506000|2, t: 93, h: -7749509256312160427, v: 2, op: "n", ns: "", o: { msg: "new primary" } } ], id: 14083714849, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:58:26.975+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 105 out: Operation aborted. 2016-12-09T20:58:26.975+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:58:26.975+0100 D REPL [rsBackgroundSync-0] fetcher read 2 operations from remote oplog starting at ts: Timestamp 1481313480000|2 and ending at ts: Timestamp 1481313506000|2 2016-12-09T20:58:26.975+0100 D REPL [rsBackgroundSync-0] batch resetting _lastOpTimeFetched: (term: 93, timestamp: Dec 9 20:58:26:2) 2016-12-09T20:58:26.975+0100 D REPL [rsSync] replication batch size is 1 2016-12-09T20:58:26.975+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:26.975+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:26.975+0100 D REPL [rsSync] setting minvalid to at least: (term: 93, timestamp: Dec 9 20:58:26:2)({ ts: Timestamp 1481313506000|2, t: 93 }) 2016-12-09T20:58:26.975+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:26.975+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:26.975+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ t,ts,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:26.975+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:26.975+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:26.975+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:58:26.975+0100 D EXECUTOR [repl writer worker 2] Executing a task on behalf of pool repl writer worker Pool 2016-12-09T20:58:26.976+0100 D EXECUTOR [repl writer worker 2] waiting for work; I am one of 16 thread(s); the minimum number of threads is 16 2016-12-09T20:58:26.976+0100 D REPL [rsSync] setting appliedThrough to: (term: 93, timestamp: Dec 9 20:58:26:2)({ ts: Timestamp 1481313506000|2, t: 93 }) 2016-12-09T20:58:26.976+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:26.976+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:26.976+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:26.976+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:26.976+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:26.976+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:58:26.976+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:26.978+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 108 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.978+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:26.978+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 108 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.978+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:26.978+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:58:26.978+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 108 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.978+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313480000|2, t: 91 } } 2016-12-09T20:58:26.978+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 108 on host localhost:31002 2016-12-09T20:58:26.980+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313470000|2, t: 90 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:26.981+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 14083714849, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:58:26.981+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 108 out: Operation aborted. 
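The optimes being reported upstream pair a BSON Timestamp with the election term, and the same value shows up in two renderings here: "Timestamp 1481313506000|2" and "(term: 93, timestamp: Dec 9 20:58:26:2)". A short decoding sketch (the seconds/increment split is inferred from those two renderings):

# decode_optime.py - turn a logged optime timestamp into wall-clock time
from bson.timestamp import Timestamp

# logged as "Timestamp 1481313506000|2": 1481313506 seconds since the epoch, increment 2
ts = Timestamp(1481313506, 2)

print(ts.time, ts.inc)     # 1481313506 2
print(ts.as_datetime())    # 2016-12-09 19:58:26+00:00, i.e. 20:58:26 in this log's +0100 offset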
2016-12-09T20:58:26.981+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:58:26.981+0100 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog 2016-12-09T20:58:26.981+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 110 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.981+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:26.981+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 110 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.981+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:26.981+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:58:26.982+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 110 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:36.981+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:26.982+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 110 on host localhost:31002 2016-12-09T20:58:27.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:27.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:27.630+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 111 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:37.630+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:28.003+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:28.004+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:28.131+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:28.131+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:28.131+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:29.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:29.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:29.204+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:25.213+0100 2016-12-09T20:58:29.204+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:28.131+0100 2016-12-09T20:58:29.204+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:25.213+0100 2016-12-09T20:58:29.204+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:35.213+0100 2016-12-09T20:58:30.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:30.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:30.137+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:30.137+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:30.137+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { 
replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:30.219+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } 2016-12-09T20:58:30.219+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:30.219+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 92 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:30.324+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59017 #12 (3 connections now open) 2016-12-09T20:58:30.324+0100 D COMMAND [conn12] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:30.325+0100 I COMMAND [conn12] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:356 locks:{} protocol:op_query 0ms 2016-12-09T20:58:30.325+0100 D QUERY [conn12] Running query: query: {} sort: {} projection: {} ntoreturn=1 2016-12-09T20:58:30.325+0100 D QUERY [conn12] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN 2016-12-09T20:58:30.325+0100 D STORAGE [conn12] WT begin_transaction 2016-12-09T20:58:30.325+0100 D STORAGE [conn12] WT rollback_transaction 2016-12-09T20:58:30.325+0100 I COMMAND [conn12] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:30.325+0100 D QUERY [conn12] Running query: query: { ts: { $gte: Timestamp 1481313506000|2, $lte: Timestamp 1481313506000|2 } } sort: {} projection: {} 2016-12-09T20:58:30.325+0100 D STORAGE [conn12] WT begin_transaction 2016-12-09T20:58:30.325+0100 D QUERY [conn12] Using direct oplog seek 2016-12-09T20:58:30.325+0100 D WRITE [conn12] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying 2016-12-09T20:58:30.325+0100 D STORAGE [conn12] WT rollback_transaction 2016-12-09T20:58:30.325+0100 D STORAGE [conn12] WT begin_transaction 2016-12-09T20:58:30.325+0100 D STORAGE [conn12] WT rollback_transaction 2016-12-09T20:58:30.325+0100 I COMMAND [conn12] query local.oplog.rs query: { ts: { $gte: Timestamp 1481313506000|2, $lte: Timestamp 1481313506000|2 } } planSummary: COLLSCAN cursorid:15133952040 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:1 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:1 reslen:114 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } 0ms 2016-12-09T20:58:30.326+0100 D COMMAND [conn12] killcursors: found 1 of 1 2016-12-09T20:58:30.326+0100 I COMMAND [conn12] killcursors local.oplog.rs keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:30.326+0100 D NETWORK [conn12] Socket recv() conn closed? 
127.0.0.1:59017 2016-12-09T20:58:30.326+0100 D NETWORK [conn12] SocketException: remote: 127.0.0.1:59017 error: 9001 socket exception [CLOSED] server [127.0.0.1:59017] 2016-12-09T20:58:30.326+0100 I NETWORK [conn12] end connection 127.0.0.1:59017 (2 connections now open) 2016-12-09T20:58:30.326+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59018 #13 (3 connections now open) 2016-12-09T20:58:30.326+0100 D COMMAND [conn13] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:30.326+0100 I COMMAND [conn13] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:356 locks:{} protocol:op_query 0ms 2016-12-09T20:58:30.327+0100 D COMMAND [conn13] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 } ] } 2016-12-09T20:58:30.327+0100 D COMMAND [conn13] command: replSetUpdatePosition 2016-12-09T20:58:30.327+0100 D REPL [conn13] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 91, timestamp: Dec 9 20:58:00:2) and is durable through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:30.327+0100 D REPL [conn13] Node with memberID 0 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 90, timestamp: Dec 9 20:57:50:2); updating to optime (term: 91, timestamp: Dec 9 20:58:00:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:30.327+0100 I COMMAND [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:30.327+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:30.327+0100 2016-12-09T20:58:30.327+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:30.137+0100 2016-12-09T20:58:30.327+0100 D REPL [ReplicationExecutor] earliest member 2 date: 2016-12-09T20:58:30.137+0100 2016-12-09T20:58:30.327+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:40.137+0100 2016-12-09T20:58:30.327+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:30.328+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59020 #14 (4 connections now open) 2016-12-09T20:58:30.328+0100 D COMMAND [conn14] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:30.328+0100 I COMMAND [conn14] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 
writeConflicts:0 numYields:0 reslen:356 locks:{} protocol:op_query 0ms 2016-12-09T20:58:30.328+0100 D COMMAND [conn14] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313480000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 93 } 2016-12-09T20:58:30.328+0100 D STORAGE [conn14] WT begin_transaction 2016-12-09T20:58:30.328+0100 D QUERY [conn14] Using direct oplog seek 2016-12-09T20:58:30.328+0100 D WRITE [conn14] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying 2016-12-09T20:58:30.328+0100 D STORAGE [conn14] WT rollback_transaction 2016-12-09T20:58:30.328+0100 D STORAGE [conn14] WT begin_transaction 2016-12-09T20:58:30.328+0100 D STORAGE [conn14] WT rollback_transaction 2016-12-09T20:58:30.329+0100 I COMMAND [conn14] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313480000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 93 } planSummary: COLLSCAN cursorid:13296294109 keysExamined:0 docsExamined:2 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:2 reslen:505 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms 2016-12-09T20:58:30.329+0100 D COMMAND [conn13] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 } ] } 2016-12-09T20:58:30.330+0100 D COMMAND [conn13] command: replSetUpdatePosition 2016-12-09T20:58:30.330+0100 D REPL [conn13] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:30.330+0100 D REPL [conn13] Node with memberID 0 currently has optime (term: 91, timestamp: Dec 9 20:58:00:2) durable through (term: 91, timestamp: Dec 9 20:58:00:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 91, timestamp: Dec 9 20:58:00:2) 2016-12-09T20:58:30.330+0100 I COMMAND [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:30.330+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:30.332+0100 D COMMAND [conn14] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 13296294109 ] } 
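Replication now flows the other way through this node: a downstream member opens its own tailable find on local.oplog.rs (conn14 above), reports its progress with replSetUpdatePosition (conn13), and SyncSourceFeedback forwards the combined progress to the new primary at localhost:31002, i.e. chained replication through the sync-source chain. A sketch for inspecting that topology (syncingTo is only populated on members that are actively syncing; the script is illustrative):

# sync_topology.py - show which member each node is pulling its oplog from
from pymongo import MongoClient

client = MongoClient('localhost', 31001)
status = client.admin.command('replSetGetStatus')

print('this node syncs from:', status.get('syncingTo'))   # localhost:31002 at this point in the log
for member in status['members']:
    print(member['name'], '->', member.get('syncingTo') or '(primary or idle)')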
2016-12-09T20:58:30.332+0100 I COMMAND [conn14] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 13296294109 ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:115 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:30.332+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 93 } 2016-12-09T20:58:30.333+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:30.333+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:30.334+0100 D NETWORK [conn13] Socket recv() conn closed? 127.0.0.1:59018 2016-12-09T20:58:30.334+0100 D NETWORK [conn13] SocketException: remote: 127.0.0.1:59018 error: 9001 socket exception [CLOSED] server [127.0.0.1:59018] 2016-12-09T20:58:30.334+0100 I NETWORK [conn13] end connection 127.0.0.1:59018 (3 connections now open) 2016-12-09T20:58:31.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:31.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:31.665+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 112 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:41.665+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:31.665+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 112 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:41.665+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:31.665+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 112 on host localhost:31002 2016-12-09T20:58:31.666+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 112 out: Operation aborted. 2016-12-09T20:58:31.666+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:31.666+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:33.666Z 2016-12-09T20:58:31.986+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 14083714849, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:58:31.986+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 110 out: Operation aborted. 
2016-12-09T20:58:31.986+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:58:31.986+0100 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog 2016-12-09T20:58:31.986+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 115 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:41.986+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:31.987+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 115 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:41.986+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:31.987+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:58:31.987+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 115 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:41.986+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:31.987+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 115 on host localhost:31002 2016-12-09T20:58:32.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:32.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:32.140+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:32.140+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:32.140+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:33.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:33.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:33.670+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 116 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:43.670+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:33.670+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 116 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:43.670+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:33.670+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 116 on host localhost:31002 2016-12-09T20:58:33.671+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 116 out: Operation aborted. 
2016-12-09T20:58:33.671+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:33.671+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:35.671Z 2016-12-09T20:58:34.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:34.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:34.145+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:34.146+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:34.146+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:35.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:35.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:35.335+0100 D REPL [SyncSourceFeedback] Sending slave oplog progress to upstream updater: { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, appliedOpTime: { ts: Timestamp 1481313480000|2, t: 91 }, memberId: 2, cfgver: 7 } ] } 2016-12-09T20:58:35.338+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 93 } 2016-12-09T20:58:35.338+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:35.338+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:35.674+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 118 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:45.674+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:35.674+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 118 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:45.674+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:35.674+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 118 on host localhost:31002 2016-12-09T20:58:35.675+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 118 out: Operation aborted. 
2016-12-09T20:58:35.675+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:35.675+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:37.675Z 2016-12-09T20:58:36.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:36.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:36.150+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:36.151+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:36.151+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:36.988+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 14083714849, ns: "local.oplog.rs" }, ok: 1.0 } 2016-12-09T20:58:36.988+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 115 out: Operation aborted. 2016-12-09T20:58:36.988+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:58:36.988+0100 D REPL [rsBackgroundSync-0] fetcher read 0 operations from remote oplog 2016-12-09T20:58:36.988+0100 D EXECUTOR [rsBackgroundSync-0] Scheduling remote command request: RemoteCommand 121 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:46.988+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:36.988+0100 D ASIO [rsBackgroundSync-0] startCommand: RemoteCommand 121 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:46.988+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:36.988+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:58:36.988+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Initiating asynchronous command: RemoteCommand 121 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:46.988+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:36.988+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Starting asynchronous command 121 on host localhost:31002 2016-12-09T20:58:37.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:37.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:37.635+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 111: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:37.636+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:37.636+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:0, msg: Couldn't get a connection within the time limit 2016-12-09T20:58:37.636+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:39.636Z 2016-12-09T20:58:37.677+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 122 
-- target:localhost:31002 db:admin expDate:2016-12-09T20:58:47.677+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:37.678+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 122 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:47.677+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:37.678+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 122 on host localhost:31002 2016-12-09T20:58:37.678+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 122 out: Operation aborted. 2016-12-09T20:58:37.678+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:37.678+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:39.678Z 2016-12-09T20:58:38.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:38.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:38.155+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:38.155+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:38.155+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:39.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:39.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:39.639+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 124 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:49.639+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:39.680+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 125 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:49.680+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:39.681+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 125 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:49.680+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 93 } 2016-12-09T20:58:39.681+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 125 on host localhost:31002 2016-12-09T20:58:39.681+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 125 out: Operation aborted. 
2016-12-09T20:58:39.681+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:39.681+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:41.681Z 2016-12-09T20:58:40.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:40.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:40.139+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:35.338+0100 2016-12-09T20:58:40.139+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:38.155+0100 2016-12-09T20:58:40.140+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:35.338+0100 2016-12-09T20:58:40.140+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:45.338+0100 2016-12-09T20:58:40.158+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:40.158+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:40.158+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:413 locks:{} protocol:op_command 0ms 2016-12-09T20:58:40.220+0100 D COMMAND [conn10] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 93, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:40.220+0100 D COMMAND [conn10] command: replSetRequestVotes 2016-12-09T20:58:40.220+0100 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:40.220+0100 D STORAGE [conn10] WT begin_transaction 2016-12-09T20:58:40.220+0100 D WRITE [conn10] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:40.220+0100 D STORAGE [conn10] WT commit_transaction 2016-12-09T20:58:40.220+0100 D STORAGE [conn10] WT begin_transaction 2016-12-09T20:58:40.220+0100 D STORAGE [conn10] WT rollback_transaction 2016-12-09T20:58:40.220+0100 I COMMAND [conn10] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 93, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:40.221+0100 D COMMAND [conn10] run command admin.$cmd { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 94, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:40.222+0100 D COMMAND [conn10] command: replSetRequestVotes 2016-12-09T20:58:40.222+0100 D QUERY [conn10] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:40.222+0100 D STORAGE [conn10] WT begin_transaction 2016-12-09T20:58:40.222+0100 D WRITE [conn10] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:40.222+0100 D STORAGE [conn10] WT commit_transaction 2016-12-09T20:58:40.222+0100 D STORAGE [conn10] WT begin_transaction 2016-12-09T20:58:40.222+0100 D STORAGE [conn10] WT rollback_transaction 2016-12-09T20:58:40.222+0100 I COMMAND [conn10] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 94, candidateIndex: 0, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:40.222+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to execute command: RemoteCommand 121 -- target:localhost:31002 db:local expDate:2016-12-09T20:58:46.988+0100 cmd:{ getMore: 14083714849, collection: "oplog.rs", maxTimeMS: 5000, term: 93, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } reason: HostUnreachable: End of file 2016-12-09T20:58:40.222+0100 D EXECUTOR [NetworkInterfaceASIO-BGSync-0] Received remote response: HostUnreachable: End of file 2016-12-09T20:58:40.222+0100 D EXECUTOR [rsBackgroundSync-0] Executing a task on behalf of pool rsBackgroundSync 2016-12-09T20:58:40.222+0100 I ASIO [NetworkInterfaceASIO-BGSync-0] Failed to close stream: Socket is not connected 2016-12-09T20:58:40.222+0100 D EXECUTOR [rsBackgroundSync-0] waiting for work; I am one of 1 thread(s); the minimum number of threads is 1 2016-12-09T20:58:40.222+0100 D REPL [rsBackgroundSync] fetcher stopped reading remote oplog on localhost:31002 2016-12-09T20:58:40.222+0100 D ASIO [NetworkInterfaceASIO-BGSync-0] Failed to time operation 121 out: Operation aborted. 
2016-12-09T20:58:40.222+0100 I REPL [ReplicationExecutor] could not find member to sync from 2016-12-09T20:58:40.222+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:40.222Z 2016-12-09T20:58:40.222+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:40.222Z 2016-12-09T20:58:40.223+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:40.223+0100 2016-12-09T20:58:40.223+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:40.223+0100 2016-12-09T20:58:40.223+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:40.223+0100 2016-12-09T20:58:40.223+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:50.223+0100 2016-12-09T20:58:40.223+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 128 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:49.639+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:40.223+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 130 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:50.223+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:40.223+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:40.223+0100 I ASIO [ReplicationExecutor] dropping unhealthy pooled connection to localhost:31002 2016-12-09T20:58:40.223+0100 I ASIO [ReplicationExecutor] after drop, pool was empty, going to spawn some connections 2016-12-09T20:58:40.223+0100 I ASIO [ReplicationExecutor] Failed to close stream: Socket is not connected 2016-12-09T20:58:40.223+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:31002 2016-12-09T20:58:40.223+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:40.223+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:40.223+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{} protocol:op_command 0ms 2016-12-09T20:58:40.225+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 129 on host localhost:12345 2016-12-09T20:58:40.226+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 131 on host localhost:31002 2016-12-09T20:58:40.226+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Successfully connected to localhost:31002 2016-12-09T20:58:40.226+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 130 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:50.223+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:40.227+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 130 on host localhost:31002 2016-12-09T20:58:40.227+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 130 out: Operation aborted. 
2016-12-09T20:58:40.227+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:40.227+0100 I REPL [ReplicationExecutor] Member localhost:31002 is now in state SECONDARY 2016-12-09T20:58:40.227+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:45.227Z 2016-12-09T20:58:40.340+0100 D REPL [SyncSourceFeedback] resetting connection in sync source feedback 2016-12-09T20:58:40.623+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Failed to connect to localhost:12345 - ExceededTimeLimit: Operation timed out 2016-12-09T20:58:40.623+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to execute command: RemoteCommand 100 -- target:localhost:12345 db:admin cmd:{ isMaster: 1 } reason: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:40.623+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 124: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:40.623+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 128: ExceededTimeLimit: Operation timed out 2016-12-09T20:58:40.623+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Operation timed out 2016-12-09T20:58:40.623+0100 D REPL [ReplicationExecutor] Bad heartbeat response from localhost:12345; trying again; Retries left: 1; 984ms have already elapsed 2016-12-09T20:58:40.623+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:40.623Z 2016-12-09T20:58:40.623+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 133 -- target:localhost:12345 db:admin expDate:2016-12-09T20:58:49.639+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:40.623+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:40.625+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 134 on host localhost:12345 2016-12-09T20:58:41.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:41.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:42.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:42.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:42.165+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } 2016-12-09T20:58:42.165+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:42.165+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 93 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:42.228+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:42.228+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:42.228+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:43.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:43.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:44.004+0100 D STORAGE [ftdc] WT 
begin_transaction 2016-12-09T20:58:44.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:44.233+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:44.234+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:44.234+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:45.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:45.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:45.230+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 135 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:55.230+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:45.230+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 135 -- target:localhost:31002 db:admin expDate:2016-12-09T20:58:55.230+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:45.230+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 135 on host localhost:31002 2016-12-09T20:58:45.231+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 135 out: Operation aborted. 2016-12-09T20:58:45.231+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:45.231+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:50.231Z 2016-12-09T20:58:46.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:46.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:46.236+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:46.236+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:46.236+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:47.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:47.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:47.169+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 94 } 2016-12-09T20:58:47.169+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:47.169+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:48.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:48.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:48.237+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:48.237+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:48.237+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: 
"rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:49.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:49.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:49.644+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to get connection from pool for request 133: ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:49.644+0100 I REPL [ReplicationExecutor] Error in heartbeat request to localhost:12345; ExceededTimeLimit: Couldn't get a connection within the time limit 2016-12-09T20:58:49.644+0100 D REPL [ReplicationExecutor] setDownValues: heartbeat response failed for member _id:0, msg: Couldn't get a connection within the time limit 2016-12-09T20:58:49.644+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:54.644Z 2016-12-09T20:58:50.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:50.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:50.227+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:48.237+0100 2016-12-09T20:58:50.227+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:47.169+0100 2016-12-09T20:58:50.227+0100 D REPL [ReplicationExecutor] earliest member 2 date: 2016-12-09T20:58:47.169+0100 2016-12-09T20:58:50.227+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:58:57.169+0100 2016-12-09T20:58:50.232+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 137 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:00.232+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:50.233+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 137 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:00.232+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 94 } 2016-12-09T20:58:50.233+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 137 on host localhost:31002 2016-12-09T20:58:50.233+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 137 out: Operation aborted. 
2016-12-09T20:58:50.233+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:50.233+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:55.233Z 2016-12-09T20:58:50.240+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:50.240+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:50.241+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{} protocol:op_command 0ms 2016-12-09T20:58:51.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:51.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:51.554+0100 I REPL [ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms 2016-12-09T20:58:51.554+0100 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected 2016-12-09T20:58:51.554+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 139 -- target:localhost:12345 db:admin expDate:2016-12-09T20:59:01.554+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 94, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:51.554+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 140 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:01.554+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 94, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:51.554+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 140 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:01.554+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: true, term: 94, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:51.554+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 140 on host localhost:31002 2016-12-09T20:58:51.555+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 140 out: Operation aborted. 2016-12-09T20:58:51.555+0100 D REPL [ReplicationExecutor] VoteRequester: Got yes vote from localhost:31002, resp:{ term: 94, voteGranted: true, reason: "", ok: 1.0 } 2016-12-09T20:58:51.555+0100 I REPL [ReplicationExecutor] dry election run succeeded, running for election 2016-12-09T20:58:51.555+0100 D EXECUTOR [replExecDBWorker-2] Executing a task on behalf of pool replExecDBWorker-Pool 2016-12-09T20:58:51.555+0100 D QUERY [replExecDBWorker-2] Only one plan is available; it will be run but will not be cached. 
query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:51.556+0100 D STORAGE [replExecDBWorker-2] WT begin_transaction 2016-12-09T20:58:51.556+0100 D WRITE [replExecDBWorker-2] update validate options -- updatedFields: Fields:[ ] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:51.556+0100 D STORAGE [replExecDBWorker-2] WT commit_transaction 2016-12-09T20:58:51.556+0100 D STORAGE [replExecDBWorker-2] WT begin_transaction 2016-12-09T20:58:51.556+0100 D STORAGE [replExecDBWorker-2] WT rollback_transaction 2016-12-09T20:58:51.556+0100 D EXECUTOR [replExecDBWorker-2] waiting for work; I am one of 3 thread(s); the minimum number of threads is 3 2016-12-09T20:58:51.556+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 142 -- target:localhost:12345 db:admin expDate:2016-12-09T20:59:01.556+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 95, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:51.556+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 143 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:01.556+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 95, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:51.556+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:51.556+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 143 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:01.556+0100 cmd:{ replSetRequestVotes: 1, setName: "rs", dryRun: false, term: 95, candidateIndex: 1, configVersion: 7, lastCommittedOp: { ts: Timestamp 1481313506000|2, t: 93 } } 2016-12-09T20:58:51.556+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 143 on host localhost:31002 2016-12-09T20:58:51.557+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 143 out: Operation aborted. 
2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] VoteRequester: Got yes vote from localhost:31002, resp:{ term: 95, voteGranted: true, reason: "", ok: 1.0 } 2016-12-09T20:58:51.557+0100 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 95 2016-12-09T20:58:51.557+0100 I REPL [ReplicationExecutor] transition to PRIMARY 2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:12345 at 2016-12-09T19:58:51.557Z 2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:51.557Z 2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:51.557+0100 2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:51.557+0100 2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:51.557+0100 2016-12-09T20:58:51.557+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:01.557+0100 2016-12-09T20:58:51.557+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 146 -- target:localhost:12345 db:admin expDate:2016-12-09T20:59:01.557+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:51.557+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 148 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:01.557+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:51.557+0100 I ASIO [NetworkInterfaceASIO-Replication-0] Connecting to localhost:12345 2016-12-09T20:58:51.557+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 148 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:01.557+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:51.557+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 148 on host localhost:31002 2016-12-09T20:58:51.558+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 144 on host localhost:12345 2016-12-09T20:58:51.558+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 148 out: Operation aborted. 
2016-12-09T20:58:51.558+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:51.558+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:53.558Z 2016-12-09T20:58:51.559+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 147 on host localhost:12345 2016-12-09T20:58:52.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:52.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:52.173+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 95 } 2016-12-09T20:58:52.173+0100 D COMMAND [conn11] command: replSetHeartbeat 2016-12-09T20:58:52.173+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 95 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:52.264+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:52.264+0100 D REPL [rsSync] returning oplog delete from point: 0:0 2016-12-09T20:58:52.264+0100 D REPL [rsSync] setting appliedThrough to: (term: -1, timestamp: Jan 1 01:00:00:0)({ ts: Timestamp 0|0, t: -1 }) 2016-12-09T20:58:52.265+0100 D QUERY [rsSync] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {}, planSummary: COLLSCAN 2016-12-09T20:58:52.265+0100 D WRITE [rsSync] update validate options -- updatedFields: Fields:[ begin,] immutableAndSingleValueFields.size:0 validate:1 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] WT commit_transaction 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] WT begin_transaction 2016-12-09T20:58:52.265+0100 D REPL [rsSync] returning initial sync flag value of 0 2016-12-09T20:58:52.265+0100 D REPL [rsSync] Removing temporary collections from app 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] looking up metadata for: app.test @ RecordId(7) 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] fetched CCE metadata: { md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] }, idxIdent: { _id_: "index-12--295440694794046494" }, ns: "app.test", ident: "collection-11--295440694794046494" } 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] returning metadata: md: { ns: "app.test", options: {}, indexes: [ { spec: { v: 1, key: { _id: 1 }, name: "_id_", ns: "app.test" }, ready: true, multikey: false, head: 0 } ] } 2016-12-09T20:58:52.265+0100 I REPL [rsSync] transition to primary complete; database writes are now permitted 2016-12-09T20:58:52.265+0100 D STORAGE [rsSync] WT rollback_transaction 2016-12-09T20:58:53.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:53.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:53.561+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 150 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:03.561+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:53.561+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 150 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:03.561+0100 cmd:{ replSetHeartbeat: "rs", 
configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:53.561+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 150 on host localhost:31002 2016-12-09T20:58:53.562+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 150 out: Operation aborted. 2016-12-09T20:58:53.562+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg: 2016-12-09T20:58:53.562+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:55.562Z 2016-12-09T20:58:54.005+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:54.006+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:55.004+0100 D STORAGE [ftdc] WT begin_transaction 2016-12-09T20:58:55.005+0100 D STORAGE [ftdc] WT rollback_transaction 2016-12-09T20:58:55.244+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } 2016-12-09T20:58:55.244+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:55.244+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 94 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:55.405+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59049 #15 (4 connections now open) 2016-12-09T20:58:55.405+0100 D COMMAND [conn15] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:55.405+0100 I COMMAND [conn15] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:55.405+0100 D QUERY [conn15] Running query: query: {} sort: {} projection: {} ntoreturn=1 2016-12-09T20:58:55.405+0100 D QUERY [conn15] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN 2016-12-09T20:58:55.405+0100 D STORAGE [conn15] WT begin_transaction 2016-12-09T20:58:55.405+0100 D STORAGE [conn15] WT rollback_transaction 2016-12-09T20:58:55.405+0100 I COMMAND [conn15] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:55.405+0100 D NETWORK [conn15] Socket recv() conn closed? 
127.0.0.1:59049 2016-12-09T20:58:55.405+0100 D NETWORK [conn15] SocketException: remote: 127.0.0.1:59049 error: 9001 socket exception [CLOSED] server [127.0.0.1:59049] 2016-12-09T20:58:55.406+0100 I NETWORK [conn15] end connection 127.0.0.1:59049 (3 connections now open) 2016-12-09T20:58:55.406+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59050 #16 (4 connections now open) 2016-12-09T20:58:55.406+0100 D COMMAND [conn16] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:55.406+0100 I COMMAND [conn16] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:55.406+0100 D COMMAND [conn14] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313520000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 95 } 2016-12-09T20:58:55.406+0100 D STORAGE [conn14] WT begin_transaction 2016-12-09T20:58:55.406+0100 D QUERY [conn14] Using direct oplog seek 2016-12-09T20:58:55.407+0100 D COMMAND [conn16] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313520000|2, t: 94 }, appliedOpTime: { ts: Timestamp 1481313520000|2, t: 94 }, memberId: 0, cfgver: 7 } ] } 2016-12-09T20:58:55.407+0100 D WRITE [conn14] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying 2016-12-09T20:58:55.407+0100 D COMMAND [conn16] command: replSetUpdatePosition 2016-12-09T20:58:55.407+0100 D STORAGE [conn14] WT rollback_transaction 2016-12-09T20:58:55.407+0100 D REPL [conn16] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 94, timestamp: Dec 9 20:58:40:2) and is durable through: (term: 94, timestamp: Dec 9 20:58:40:2) 2016-12-09T20:58:55.407+0100 D STORAGE [conn14] WT begin_transaction 2016-12-09T20:58:55.407+0100 D REPL [conn16] Node with memberID 0 currently has optime (term: 93, timestamp: Dec 9 20:58:26:2) durable through (term: 91, timestamp: Dec 9 20:58:00:2); updating to optime (term: 94, timestamp: Dec 9 20:58:40:2) and durable through (term: 94, timestamp: Dec 9 20:58:40:2) 2016-12-09T20:58:55.407+0100 D STORAGE [conn14] WT rollback_transaction 2016-12-09T20:58:55.407+0100 I COMMAND [conn16] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313520000|2, t: 94 }, appliedOpTime: { ts: Timestamp 1481313520000|2, t: 94 }, memberId: 0, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms 2016-12-09T20:58:55.407+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:55.407+0100 2016-12-09T20:58:55.407+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:52.173+0100 2016-12-09T20:58:55.407+0100 D REPL [ReplicationExecutor] earliest member 2 date: 2016-12-09T20:58:52.173+0100 2016-12-09T20:58:55.407+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:02.173+0100 2016-12-09T20:58:55.407+0100 I COMMAND [conn14] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313520000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 95 } planSummary: COLLSCAN cursorid:16431994781 keysExamined:0 docsExamined:2 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:1 reslen:408 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { 
acquireCount: { r: 2 } } } protocol:op_command 0ms 2016-12-09T20:58:55.408+0100 D COMMAND [conn14] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 16431994781 ] } 2016-12-09T20:58:55.408+0100 I COMMAND [conn14] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 16431994781 ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:115 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms 2016-12-09T20:58:55.408+0100 D NETWORK [conn16] Socket recv() conn closed? 127.0.0.1:59050 2016-12-09T20:58:55.408+0100 D NETWORK [conn16] SocketException: remote: 127.0.0.1:59050 error: 9001 socket exception [CLOSED] server [127.0.0.1:59050] 2016-12-09T20:58:55.408+0100 I NETWORK [conn16] end connection 127.0.0.1:59050 (3 connections now open) 2016-12-09T20:58:55.409+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59051 #17 (4 connections now open) 2016-12-09T20:58:55.409+0100 D COMMAND [conn17] run command admin.$cmd { isMaster: 1 } 2016-12-09T20:58:55.409+0100 I COMMAND [conn17] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms 2016-12-09T20:58:55.409+0100 D COMMAND [conn17] run command admin.$cmd { replSetGetRBID: 1 } 2016-12-09T20:58:55.409+0100 D COMMAND [conn17] command: replSetGetRBID 2016-12-09T20:58:55.409+0100 I COMMAND [conn17] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:32 locks:{} protocol:op_command 0ms 2016-12-09T20:58:55.409+0100 D QUERY [conn17] Running query: query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 } 2016-12-09T20:58:55.409+0100 D QUERY [conn17] Only one plan is available; it will be run but will not be cached. query: {} sort: { $natural: -1 } projection: { ts: 1, h: 1 }, planSummary: COLLSCAN 2016-12-09T20:58:55.409+0100 D STORAGE [conn17] WT begin_transaction 2016-12-09T20:58:55.409+0100 D STORAGE [conn17] WT rollback_transaction 2016-12-09T20:58:55.409+0100 I COMMAND [conn17] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN cursorid:15517094024 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:101 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:101 reslen:2848 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:55.410+0100 D COMMAND [conn17] killcursors: found 1 of 1 2016-12-09T20:58:55.410+0100 I COMMAND [conn17] killcursors local.oplog.rs keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:55.410+0100 D QUERY [conn17] Running query: query: {} sort: { $natural: -1 } projection: {} ntoreturn=1 2016-12-09T20:58:55.410+0100 D QUERY [conn17] Only one plan is available; it will be run but will not be cached. 
query: {} sort: { $natural: -1 } projection: {} ntoreturn=1, planSummary: COLLSCAN 2016-12-09T20:58:55.410+0100 D STORAGE [conn17] WT begin_transaction 2016-12-09T20:58:55.410+0100 D STORAGE [conn17] WT rollback_transaction 2016-12-09T20:58:55.410+0100 I COMMAND [conn17] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:114 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms 2016-12-09T20:58:55.410+0100 D COMMAND [conn17] run command admin.$cmd { replSetGetRBID: 1 } 2016-12-09T20:58:55.410+0100 D COMMAND [conn17] command: replSetGetRBID 2016-12-09T20:58:55.410+0100 I COMMAND [conn17] command admin.$cmd command: replSetGetRBID { replSetGetRBID: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:32 locks:{} protocol:op_command 0ms 2016-12-09T20:58:55.424+0100 D NETWORK [conn17] Socket recv() conn closed? 127.0.0.1:59051 2016-12-09T20:58:55.424+0100 D NETWORK [conn17] SocketException: remote: 127.0.0.1:59051 error: 9001 socket exception [CLOSED] server [127.0.0.1:59051] 2016-12-09T20:58:55.424+0100 I NETWORK [conn17] end connection 127.0.0.1:59051 (3 connections now open) 2016-12-09T20:58:55.425+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 95 } 2016-12-09T20:58:55.425+0100 D COMMAND [conn10] command: replSetHeartbeat 2016-12-09T20:58:55.425+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 95 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms 2016-12-09T20:58:55.567+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 152 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:05.567+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:55.567+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 152 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:05.567+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 } 2016-12-09T20:58:55.567+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 152 on host localhost:31002 2016-12-09T20:58:55.568+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 152 out: Operation aborted. 
2016-12-09T20:58:55.568+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg:
2016-12-09T20:58:55.568+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:57.568Z
2016-12-09T20:58:56.004+0100 D STORAGE [ftdc] WT begin_transaction
2016-12-09T20:58:56.004+0100 D STORAGE [ftdc] WT rollback_transaction
2016-12-09T20:58:56.428+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59052 #18 (4 connections now open)
2016-12-09T20:58:56.429+0100 D COMMAND [conn18] run command admin.$cmd { isMaster: 1 }
2016-12-09T20:58:56.429+0100 I COMMAND [conn18] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2016-12-09T20:58:56.429+0100 D QUERY [conn18] Running query: query: {} sort: {} projection: {} ntoreturn=1
2016-12-09T20:58:56.429+0100 D QUERY [conn18] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN
2016-12-09T20:58:56.429+0100 D STORAGE [conn18] WT begin_transaction
2016-12-09T20:58:56.429+0100 D STORAGE [conn18] WT rollback_transaction
2016-12-09T20:58:56.429+0100 I COMMAND [conn18] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
2016-12-09T20:58:56.429+0100 D QUERY [conn18] Running query: query: { ts: { $gte: Timestamp 1481313532000|1, $lte: Timestamp 1481313532000|1 } } sort: {} projection: {}
2016-12-09T20:58:56.429+0100 D STORAGE [conn18] WT begin_transaction
2016-12-09T20:58:56.429+0100 D QUERY [conn18] Using direct oplog seek
2016-12-09T20:58:56.429+0100 D WRITE [conn18] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying
2016-12-09T20:58:56.429+0100 D STORAGE [conn18] WT rollback_transaction
2016-12-09T20:58:56.429+0100 D STORAGE [conn18] WT begin_transaction
2016-12-09T20:58:56.429+0100 D STORAGE [conn18] WT rollback_transaction
2016-12-09T20:58:56.429+0100 I COMMAND [conn18] query local.oplog.rs query: { ts: { $gte: Timestamp 1481313532000|1, $lte: Timestamp 1481313532000|1 } } planSummary: COLLSCAN cursorid:14835304372 ntoreturn:0 ntoskip:0 keysExamined:0 docsExamined:1 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:1 reslen:114 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } 0ms
2016-12-09T20:58:56.429+0100 D COMMAND [conn18] killcursors: found 1 of 1
2016-12-09T20:58:56.429+0100 I COMMAND [conn18] killcursors local.oplog.rs keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
2016-12-09T20:58:56.429+0100 D NETWORK [conn18] Socket recv() conn closed? 127.0.0.1:59052
2016-12-09T20:58:56.429+0100 D NETWORK [conn18] SocketException: remote: 127.0.0.1:59052 error: 9001 socket exception [CLOSED] server [127.0.0.1:59052]
2016-12-09T20:58:56.429+0100 I NETWORK [conn18] end connection 127.0.0.1:59052 (3 connections now open)
2016-12-09T20:58:56.430+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59053 #19 (4 connections now open)
2016-12-09T20:58:56.430+0100 D COMMAND [conn19] run command admin.$cmd { isMaster: 1 }
2016-12-09T20:58:56.430+0100 I COMMAND [conn19] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2016-12-09T20:58:56.430+0100 D COMMAND [conn14] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313506000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 95 }
2016-12-09T20:58:56.430+0100 D STORAGE [conn14] WT begin_transaction
2016-12-09T20:58:56.430+0100 D QUERY [conn14] Using direct oplog seek
2016-12-09T20:58:56.430+0100 D COMMAND [conn19] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 2, cfgver: 7 } ] }
2016-12-09T20:58:56.430+0100 D WRITE [conn14] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying
2016-12-09T20:58:56.430+0100 D COMMAND [conn19] command: replSetUpdatePosition
2016-12-09T20:58:56.430+0100 D STORAGE [conn14] WT rollback_transaction
2016-12-09T20:58:56.430+0100 D STORAGE [conn14] WT begin_transaction
2016-12-09T20:58:56.430+0100 D REPL [conn19] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.430+0100 D STORAGE [conn14] WT rollback_transaction
2016-12-09T20:58:56.430+0100 D REPL [conn19] Node with memberID 0 currently has optime (term: 94, timestamp: Dec 9 20:58:40:2) durable through (term: 94, timestamp: Dec 9 20:58:40:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.430+0100 D REPL [conn19] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.430+0100 D REPL [conn19] Node with memberID 2 currently has optime (term: 93, timestamp: Dec 9 20:58:26:2) durable through (term: 93, timestamp: Dec 9 20:58:26:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.430+0100 I COMMAND [conn19] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms
2016-12-09T20:58:56.430+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:56.430+0100
2016-12-09T20:58:56.430+0100 I COMMAND [conn14] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313506000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 95 } planSummary: COLLSCAN cursorid:14062533201 keysExamined:0 docsExamined:2 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:2 reslen:505 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
2016-12-09T20:58:56.430+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:56.430+0100
2016-12-09T20:58:56.430+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:56.430+0100
2016-12-09T20:58:56.430+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:06.430+0100
2016-12-09T20:58:56.431+0100 D COMMAND [conn19] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 2, cfgver: 7 } ] }
2016-12-09T20:58:56.431+0100 D COMMAND [conn19] command: replSetUpdatePosition
2016-12-09T20:58:56.431+0100 D REPL [conn19] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 95, timestamp: Dec 9 20:58:52:1) and is durable through: (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.431+0100 D REPL [conn19] Node with memberID 0 currently has optime (term: 94, timestamp: Dec 9 20:58:40:2) durable through (term: 94, timestamp: Dec 9 20:58:40:2); updating to optime (term: 95, timestamp: Dec 9 20:58:52:1) and durable through (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.431+0100 D REPL [conn19] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.431+0100 D REPL [conn19] Node with memberID 2 currently has optime (term: 93, timestamp: Dec 9 20:58:26:2) durable through (term: 93, timestamp: Dec 9 20:58:26:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:56.431+0100 I COMMAND [conn19] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms
2016-12-09T20:58:56.431+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:56.431+0100
2016-12-09T20:58:56.431+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:56.431+0100
2016-12-09T20:58:56.431+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:56.431+0100
2016-12-09T20:58:56.431+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:06.431+0100
2016-12-09T20:58:56.433+0100 D COMMAND [conn14] run command local.$cmd { killCursors: "oplog.rs", cursors: [ 14062533201 ] }
2016-12-09T20:58:56.433+0100 I COMMAND [conn14] command local.oplog.rs command: killCursors { killCursors: "oplog.rs", cursors: [ 14062533201 ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:115 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 0ms
2016-12-09T20:58:56.434+0100 D COMMAND [conn10] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 95 }
2016-12-09T20:58:56.434+0100 D COMMAND [conn10] command: replSetHeartbeat
2016-12-09T20:58:56.434+0100 I COMMAND [conn10] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:12345", fromId: 0, term: 95 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms
2016-12-09T20:58:56.436+0100 D NETWORK [conn19] Socket recv() conn closed? 127.0.0.1:59053
2016-12-09T20:58:56.436+0100 D NETWORK [conn19] SocketException: remote: 127.0.0.1:59053 error: 9001 socket exception [CLOSED] server [127.0.0.1:59053]
2016-12-09T20:58:56.436+0100 I NETWORK [conn19] end connection 127.0.0.1:59053 (3 connections now open)
2016-12-09T20:58:57.004+0100 D STORAGE [ftdc] WT begin_transaction
2016-12-09T20:58:57.006+0100 D STORAGE [ftdc] WT rollback_transaction
2016-12-09T20:58:57.178+0100 D COMMAND [conn11] run command admin.$cmd { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 95 }
2016-12-09T20:58:57.178+0100 D COMMAND [conn11] command: replSetHeartbeat
2016-12-09T20:58:57.179+0100 I COMMAND [conn11] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31002", fromId: 2, term: 95 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:404 locks:{} protocol:op_command 0ms
2016-12-09T20:58:57.279+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59054 #20 (4 connections now open)
2016-12-09T20:58:57.279+0100 D COMMAND [conn20] run command admin.$cmd { isMaster: 1 }
2016-12-09T20:58:57.279+0100 I COMMAND [conn20] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2016-12-09T20:58:57.279+0100 D QUERY [conn20] Running query: query: {} sort: {} projection: {} ntoreturn=1
2016-12-09T20:58:57.279+0100 D QUERY [conn20] Only one plan is available; it will be run but will not be cached. query: {} sort: {} projection: {} ntoreturn=1, planSummary: COLLSCAN
2016-12-09T20:58:57.279+0100 D STORAGE [conn20] WT begin_transaction
2016-12-09T20:58:57.279+0100 D STORAGE [conn20] WT rollback_transaction
2016-12-09T20:58:57.279+0100 I COMMAND [conn20] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 keysExamined:0 docsExamined:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:106 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms
2016-12-09T20:58:57.279+0100 D NETWORK [conn20] Socket recv() conn closed? 127.0.0.1:59054
2016-12-09T20:58:57.279+0100 D NETWORK [conn20] SocketException: remote: 127.0.0.1:59054 error: 9001 socket exception [CLOSED] server [127.0.0.1:59054]
2016-12-09T20:58:57.279+0100 I NETWORK [conn20] end connection 127.0.0.1:59054 (3 connections now open)
2016-12-09T20:58:57.280+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59055 #21 (4 connections now open)
2016-12-09T20:58:57.280+0100 D COMMAND [conn21] run command admin.$cmd { isMaster: 1 }
2016-12-09T20:58:57.280+0100 I COMMAND [conn21] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2016-12-09T20:58:57.281+0100 D COMMAND [conn21] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313495000|2, t: 92 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 2, cfgver: 7 } ] }
2016-12-09T20:58:57.281+0100 D COMMAND [conn21] command: replSetUpdatePosition
2016-12-09T20:58:57.281+0100 D REPL [conn21] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 92, timestamp: Dec 9 20:58:15:2)
2016-12-09T20:58:57.281+0100 D REPL [conn21] Node with memberID 0 currently has optime (term: 95, timestamp: Dec 9 20:58:52:1) durable through (term: 94, timestamp: Dec 9 20:58:40:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 92, timestamp: Dec 9 20:58:15:2)
2016-12-09T20:58:57.281+0100 D REPL [conn21] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:57.281+0100 D REPL [conn21] Node with memberID 2 currently has optime (term: 93, timestamp: Dec 9 20:58:26:2) durable through (term: 93, timestamp: Dec 9 20:58:26:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:57.281+0100 I COMMAND [conn21] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313495000|2, t: 92 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms
2016-12-09T20:58:57.281+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:57.281+0100
2016-12-09T20:58:57.281+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:57.281+0100
2016-12-09T20:58:57.281+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:57.281+0100
2016-12-09T20:58:57.281+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:07.281+0100
2016-12-09T20:58:57.282+0100 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59057 #22 (5 connections now open)
2016-12-09T20:58:57.282+0100 D COMMAND [conn22] run command admin.$cmd { isMaster: 1 }
2016-12-09T20:58:57.282+0100 I COMMAND [conn22] command admin.$cmd command: isMaster { isMaster: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:380 locks:{} protocol:op_query 0ms
2016-12-09T20:58:57.282+0100 D COMMAND [conn22] run command local.$cmd { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313506000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 95 }
2016-12-09T20:58:57.282+0100 D STORAGE [conn22] WT begin_transaction
2016-12-09T20:58:57.282+0100 D QUERY [conn22] Using direct oplog seek
2016-12-09T20:58:57.282+0100 D WRITE [conn22] Caught WriteConflictException doing plan execution on local.oplog.rs, attempt: 1 retrying
2016-12-09T20:58:57.282+0100 D STORAGE [conn22] WT rollback_transaction
2016-12-09T20:58:57.283+0100 D STORAGE [conn22] WT begin_transaction
2016-12-09T20:58:57.283+0100 D STORAGE [conn22] WT rollback_transaction
2016-12-09T20:58:57.283+0100 I COMMAND [conn22] command local.oplog.rs command: find { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313506000|2 } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000, term: 95 } planSummary: COLLSCAN cursorid:17034141109 keysExamined:0 docsExamined:2 keyUpdates:0 writeConflicts:1 numYields:1 nreturned:2 reslen:505 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 0ms
2016-12-09T20:58:57.284+0100 D COMMAND [conn21] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313495000|2, t: 92 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, memberId: 2, cfgver: 7 } ] }
2016-12-09T20:58:57.284+0100 D COMMAND [conn21] command: replSetUpdatePosition
2016-12-09T20:58:57.284+0100 D REPL [conn21] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 92, timestamp: Dec 9 20:58:15:2)
2016-12-09T20:58:57.284+0100 D REPL [conn21] Node with memberID 0 currently has optime (term: 95, timestamp: Dec 9 20:58:52:1) durable through (term: 94, timestamp: Dec 9 20:58:40:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 92, timestamp: Dec 9 20:58:15:2)
2016-12-09T20:58:57.284+0100 D REPL [conn21] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 95, timestamp: Dec 9 20:58:52:1) and is durable through: (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:57.284+0100 D REPL [conn21] Node with memberID 2 currently has optime (term: 93, timestamp: Dec 9 20:58:26:2) durable through (term: 93, timestamp: Dec 9 20:58:26:2); updating to optime (term: 95, timestamp: Dec 9 20:58:52:1) and durable through (term: 93, timestamp: Dec 9 20:58:26:2)
2016-12-09T20:58:57.284+0100 I COMMAND [conn21] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313495000|2, t: 92 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms
2016-12-09T20:58:57.284+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:57.284+0100
2016-12-09T20:58:57.284+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:57.284+0100
2016-12-09T20:58:57.284+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:57.284+0100
2016-12-09T20:58:57.284+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:07.284+0100
2016-12-09T20:58:57.286+0100 D COMMAND [conn22] run command local.$cmd { getMore: 17034141109, collection: "oplog.rs", maxTimeMS: 5000, term: 95, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } }
2016-12-09T20:58:57.286+0100 D STORAGE [conn22] WT begin_transaction
2016-12-09T20:58:57.286+0100 D STORAGE [conn22] WT rollback_transaction
2016-12-09T20:58:57.288+0100 D COMMAND [conn21] run command admin.$cmd { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313495000|2, t: 92 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, appliedOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, memberId: 2, cfgver: 7 } ] }
2016-12-09T20:58:57.288+0100 D COMMAND [conn21] command: replSetUpdatePosition
2016-12-09T20:58:57.288+0100 D REPL [conn21] received notification that node with memberID 0 in config with version 7 has reached optime: (term: 93, timestamp: Dec 9 20:58:26:2) and is durable through: (term: 92, timestamp: Dec 9 20:58:15:2)
2016-12-09T20:58:57.288+0100 D REPL [conn21] Node with memberID 0 currently has optime (term: 95, timestamp: Dec 9 20:58:52:1) durable through (term: 94, timestamp: Dec 9 20:58:40:2); updating to optime (term: 93, timestamp: Dec 9 20:58:26:2) and durable through (term: 92, timestamp: Dec 9 20:58:15:2)
2016-12-09T20:58:57.288+0100 D REPL [conn21] received notification that node with memberID 2 in config with version 7 has reached optime: (term: 95, timestamp: Dec 9 20:58:52:1) and is durable through: (term: 95, timestamp: Dec 9 20:58:52:1)
2016-12-09T20:58:57.288+0100 D REPL [conn21] Node with memberID 2 currently has optime (term: 95, timestamp: Dec 9 20:58:52:1) durable through (term: 93, timestamp: Dec 9 20:58:26:2); updating to optime (term: 95, timestamp: Dec 9 20:58:52:1) and durable through (term: 95, timestamp: Dec 9 20:58:52:1)
2016-12-09T20:58:57.289+0100 I COMMAND [conn21] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { durableOpTime: { ts: Timestamp 1481313495000|2, t: 92 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 0, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, appliedOpTime: { ts: Timestamp 1481313506000|2, t: 93 }, memberId: 1, cfgver: 7 }, { durableOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, appliedOpTime: { ts: Timestamp 1481313532000|1, t: 95 }, memberId: 2, cfgver: 7 } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{} protocol:op_command 0ms
2016-12-09T20:58:57.289+0100 D STORAGE [conn22] WT begin_transaction
2016-12-09T20:58:57.289+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:57.288+0100
2016-12-09T20:58:57.289+0100 D STORAGE [conn22] WT rollback_transaction
2016-12-09T20:58:57.289+0100 D REPL [ReplicationExecutor] slaveinfo lastupdate is: 2016-12-09T20:58:57.289+0100
2016-12-09T20:58:57.289+0100 D REPL [ReplicationExecutor] earliest member 0 date: 2016-12-09T20:58:57.288+0100
2016-12-09T20:58:57.289+0100 D REPL [ReplicationExecutor] scheduling next check at 2016-12-09T20:59:07.288+0100
2016-12-09T20:58:57.290+0100 I COMMAND [conn22] command local.oplog.rs command: getMore { getMore: 17034141109, collection: "oplog.rs", maxTimeMS: 5000, term: 95, lastKnownCommittedOpTime: { ts: Timestamp 1481313506000|2, t: 93 } } planSummary: COLLSCAN cursorid:17034141109 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:292 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, oplog: { acquireCount: { r: 2 } } } protocol:op_command 3ms
2016-12-09T20:58:57.290+0100 D COMMAND [conn22] run command local.$cmd { getMore: 17034141109, collection: "oplog.rs", maxTimeMS: 5000, term: 95, lastKnownCommittedOpTime: { ts: Timestamp 1481313532000|1, t: 95 } }
2016-12-09T20:58:57.290+0100 D STORAGE [conn22] WT begin_transaction
2016-12-09T20:58:57.290+0100 D STORAGE [conn22] WT rollback_transaction
2016-12-09T20:58:57.568+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 154 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:07.568+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 }
2016-12-09T20:58:57.568+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 154 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:07.568+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 }
2016-12-09T20:58:57.568+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 154 on host localhost:31002
2016-12-09T20:58:57.569+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 154 out: Operation aborted.
2016-12-09T20:58:57.569+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg:
2016-12-09T20:58:57.569+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:58:59.569Z
2016-12-09T20:58:58.004+0100 D STORAGE [ftdc] WT begin_transaction
2016-12-09T20:58:58.006+0100 D STORAGE [ftdc] WT rollback_transaction
2016-12-09T20:58:58.597+0100 D NETWORK [conn14] Socket recv() conn closed? 127.0.0.1:59020
2016-12-09T20:58:58.597+0100 D NETWORK [conn14] SocketException: remote: 127.0.0.1:59020 error: 9001 socket exception [CLOSED] server [127.0.0.1:59020]
2016-12-09T20:58:58.597+0100 I NETWORK [conn14] end connection 127.0.0.1:59020 (4 connections now open)
2016-12-09T20:58:58.598+0100 D NETWORK [conn10] Socket recv() conn closed? 127.0.0.1:58985
2016-12-09T20:58:58.598+0100 D NETWORK [conn10] SocketException: remote: 127.0.0.1:58985 error: 9001 socket exception [CLOSED] server [127.0.0.1:58985]
2016-12-09T20:58:58.598+0100 I NETWORK [conn10] end connection 127.0.0.1:58985 (3 connections now open)
2016-12-09T20:58:59.004+0100 D STORAGE [ftdc] WT begin_transaction
2016-12-09T20:58:59.006+0100 D STORAGE [ftdc] WT rollback_transaction
2016-12-09T20:58:59.573+0100 D ASIO [ReplicationExecutor] startCommand: RemoteCommand 156 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:09.573+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 }
2016-12-09T20:58:59.574+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Initiating asynchronous command: RemoteCommand 156 -- target:localhost:31002 db:admin expDate:2016-12-09T20:59:09.573+0100 cmd:{ replSetHeartbeat: "rs", configVersion: 7, from: "localhost:31001", fromId: 1, term: 95 }
2016-12-09T20:58:59.574+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Starting asynchronous command 156 on host localhost:31002
2016-12-09T20:58:59.574+0100 D ASIO [NetworkInterfaceASIO-Replication-0] Failed to time operation 156 out: Operation aborted.
2016-12-09T20:58:59.574+0100 D REPL [ReplicationExecutor] setUpValues: heartbeat response good for member _id:2, msg:
2016-12-09T20:58:59.574+0100 D REPL [ReplicationExecutor] Scheduling heartbeat to localhost:31002 at 2016-12-09T19:59:01.574Z
2016-12-09T20:58:59.882+0100 I CONTROL [signalProcessingThread] got signal 2 (Interrupt: 2), will terminate after current cmd ends
2016-12-09T20:58:59.882+0100 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2016-12-09T20:58:59.883+0100 D ASIO [ReplicationExecutor] NetworkInterfaceASIO shutdown successfully
2016-12-09T20:58:59.883+0100 I REPL [signalProcessingThread] Stopping replication applier threads
2016-12-09T20:59:00.284+0100 D EXECUTOR [rsBackgroundSync-0] shutting down thread in pool rsBackgroundSync
2016-12-09T20:59:00.284+0100 D ASIO [rsBackgroundSync] NetworkInterfaceASIO shutdown successfully
2016-12-09T20:59:01.284+0100 D EXECUTOR [repl prefetch worker 1] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.284+0100 D EXECUTOR [repl prefetch worker 2] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.284+0100 D EXECUTOR [repl prefetch worker 3] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 6] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 5] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 4] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 0] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 8] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 7] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 9] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 10] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 11] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 12] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 13] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 14] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.285+0100 D EXECUTOR [repl prefetch worker 15] shutting down thread in pool repl prefetch worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 3] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 4] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 6] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 7] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 8] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 9] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 10] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 5] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 11] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 13] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 12] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 14] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 15] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 1] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 0] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.286+0100 D EXECUTOR [repl writer worker 2] shutting down thread in pool repl writer worker Pool
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT begin_transaction
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT rollback_transaction
2016-12-09T20:59:01.287+0100 D REPL [signalProcessingThread] returning oplog delete from point: 0:0
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT begin_transaction
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT rollback_transaction
2016-12-09T20:59:01.287+0100 D REPL [signalProcessingThread] returning initial sync flag value of 0
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT begin_transaction
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT rollback_transaction
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT begin_transaction
2016-12-09T20:59:01.287+0100 D STORAGE [signalProcessingThread] WT rollback_transaction
2016-12-09T20:59:01.287+0100 I CONTROL [signalProcessingThread] now exiting
2016-12-09T20:59:01.287+0100 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2016-12-09T20:59:01.287+0100 I NETWORK [signalProcessingThread] closing listening socket: 6
2016-12-09T20:59:01.287+0100 I NETWORK [signalProcessingThread] closing listening socket: 7
2016-12-09T20:59:01.287+0100 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31001.sock
2016-12-09T20:59:01.287+0100 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2016-12-09T20:59:01.287+0100 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
2016-12-09T20:59:01.288+0100 D NETWORK [thread1] Closing connection # 22
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: app.test
2016-12-09T20:59:01.288+0100 D NETWORK [thread1] Closing connection # 21
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
2016-12-09T20:59:01.288+0100 D NETWORK [thread1] Closing connection # 11
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.election
2016-12-09T20:59:01.288+0100 D NETWORK [conn21] Socket recv() errno:9 Bad file descriptor 127.0.0.1:59055
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid
2016-12-09T20:59:01.288+0100 D NETWORK [conn11] Socket recv() conn closed? 127.0.0.1:58987
2016-12-09T20:59:01.288+0100 D NETWORK [conn21] SocketException: remote: 127.0.0.1:59055 error: 9001 socket exception [RECV_ERROR] server [127.0.0.1:59055]
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
2016-12-09T20:59:01.288+0100 D NETWORK [conn11] SocketException: remote: 127.0.0.1:58987 error: 9001 socket exception [CLOSED] server [127.0.0.1:58987]
2016-12-09T20:59:01.288+0100 I NETWORK [conn21] end connection 127.0.0.1:59055 (2 connections now open)
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
2016-12-09T20:59:01.288+0100 I NETWORK [conn11] end connection 127.0.0.1:58987 (2 connections now open)
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
2016-12-09T20:59:01.288+0100 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:_mdb_catalog -> { numRecords: 7, dataSize: 2032 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-0--295440694794046494 -> { numRecords: 1, dataSize: 61 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-11--295440694794046494 -> { numRecords: 44443, dataSize: 1288847 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-2--295440694794046494 -> { numRecords: 9, dataSize: 14051 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-4--295440694794046494 -> { numRecords: 1, dataSize: 705 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-6--295440694794046494 -> { numRecords: 44520, dataSize: 4673827 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-7--295440694794046494 -> { numRecords: 1, dataSize: 75 }
2016-12-09T20:59:01.288+0100 D STORAGE [signalProcessingThread] WiredTigerSizeStorer::storeInto table:collection-9--295440694794046494 -> { numRecords: 1, dataSize: 60 }
2016-12-09T20:59:01.341+0100 D STORAGE [WTJournalFlusher] stopping WTJournalFlusher thread
2016-12-09T20:59:01.387+0100 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2016-12-09T20:59:01.388+0100 I CONTROL [signalProcessingThread] dbexit: rc: 0
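
Note on the oplog traffic above: the repeated find/getMore commands on local.oplog.rs (conn14 and conn22) are the tailable, awaitData cursors with which the other replica-set members pull new oplog entries from this node, and the replSetUpdatePosition commands report back how far each member has applied and journaled them. The snippet below is a minimal sketch of an equivalent client-side read, not part of the original log; it assumes PyMongo 3.x is installed and a member is listening on localhost:31001 as in this run, and the Timestamp value is copied from the logged filter.

# Tailable, awaitData oplog read roughly equivalent to the logged command
# { find: "oplog.rs", filter: { ts: { $gte: Timestamp 1481313506000|2 } },
#   tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 60000 }.
# Hypothetical illustration only; host, port, and PyMongo version are assumptions.
from pymongo import MongoClient, CursorType
from bson.timestamp import Timestamp

client = MongoClient("localhost", 31001)           # the node this log came from
oplog = client.local["oplog.rs"]

cursor = oplog.find(
    {"ts": {"$gte": Timestamp(1481313506, 2)}},    # Timestamp 1481313506000|2 in the log
    cursor_type=CursorType.TAILABLE_AWAIT,         # tailable: true, awaitData: true
    oplog_replay=True,                             # oplogReplay: true
    max_await_time_ms=5000,                        # maxTimeMS used by the logged getMore
)
for entry in cursor:                               # each iteration may block on a getMore
    print(entry["ts"], entry.get("op"), entry.get("ns"))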