-- Logs begin at Sat 2019-01-12 01:39:29 UTC. --
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017
Jan 13 18:46:16 ivy mongos[27723]: 2019-01-13T18:46:16.323+0000 D SHARDING [shard registry reload] Adding shard config, with CS sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019
Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.767+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b8779a1824195fadc68b8
Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.767+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 14087 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:47.767+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547405177767) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 }
Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.767+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 14087 -- target:ira.node.gce-us-east1.admiral:27019 db:config
expDate:2019-01-13T18:46:47.767+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547405177767) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.768+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.768+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.768+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:17 ivy mongos[27723]: 2019-01-13T18:46:17.768+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:18 ivy mongos[27723]: 2019-01-13T18:46:18.004+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:18 ivy mongos[27723]: 2019-01-13T18:46:18.005+0000 D ASIO [ShardRegistry] Request 14087 finished with response: { lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547405147561) }, ok: 1.0, operationTime: Timestamp(1547405177, 635), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405177, 650), t: 1 }, lastOpVisible: { ts: Timestamp(1547405177, 650), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405177, 635), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405177, 650), $clusterTime: { clusterTime: Timestamp(1547405177, 757), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:18 ivy mongos[27723]: 2019-01-13T18:46:18.005+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547405147561) }, ok: 1.0, operationTime: Timestamp(1547405177, 635), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405177, 650), t: 1 }, lastOpVisible: { ts: Timestamp(1547405177, 650), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405177, 635), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405177, 650), $clusterTime: { clusterTime: Timestamp(1547405177, 757), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:18 ivy mongos[27723]: 2019-01-13T18:46:18.005+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.000+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.000+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.036+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.036+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ 
"ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547405178, 815), t: 1 }, lastWriteDate: new Date(1547405178000), majorityOpTime: { ts: Timestamp(1547405178, 449), t: 1 }, majorityWriteDate: new Date(1547405178000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179016), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405178, 815), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405178, 449), $clusterTime: { clusterTime: Timestamp(1547405178, 815), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.036+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T18:46:18.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.036+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547405178, 815), t: 1 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.036+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.076+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547405178, 815), t: 1 }, lastWriteDate: new Date(1547405178000), majorityOpTime: { ts: Timestamp(1547405178, 682), t: 1 }, majorityWriteDate: new Date(1547405178000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179051), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405178, 815), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405178, 682), $clusterTime: { clusterTime: Timestamp(1547405178, 815), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.076+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 
2019-01-13T18:46:18.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.076+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547405178, 815), t: 1 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.076+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.116+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.116+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547405178, 814), t: 1 }, lastWriteDate: new Date(1547405178000), majorityOpTime: { ts: Timestamp(1547405178, 449), t: 1 }, majorityWriteDate: new Date(1547405178000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179093), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405178, 814), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405178, 449), $clusterTime: { clusterTime: Timestamp(1547405178, 815), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.116+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T18:46:18.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.116+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547405178, 814), t: 1 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.116+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.223+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.223+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547405178, 815), t: 1 }, lastWriteDate: new Date(1547405178000), majorityOpTime: { ts: Timestamp(1547405178, 682), t: 1 }, majorityWriteDate: new Date(1547405178000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 
100000, localTime: new Date(1547405179165), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405178, 815), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405178, 682), $clusterTime: { clusterTime: Timestamp(1547405179, 18), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.223+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T18:46:18.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.223+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547405178, 815), t: 1 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.223+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.262+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.262+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547405178, 815), t: 1 }, lastWriteDate: new Date(1547405178000), majorityOpTime: { ts: Timestamp(1547405178, 815), t: 1 }, majorityWriteDate: new Date(1547405178000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179238), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405178, 815), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405178, 815), $clusterTime: { clusterTime: Timestamp(1547405179, 105), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.262+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T18:46:18.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.262+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547405178, 815), t: 1 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.262+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", 
"jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547405178, 815), t: 1 }, lastWriteDate: new Date(1547405178000), majorityOpTime: { ts: Timestamp(1547405178, 815), t: 1 }, majorityWriteDate: new Date(1547405178000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179310), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405178, 815), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405178, 815), $clusterTime: { clusterTime: Timestamp(1547405179, 76), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T18:46:18.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547405178, 815), t: 1 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 369 msec Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.369+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D SHARDING [conn118] Command begin db: visitor_api msg id: 10 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D QUERY [conn118] Beginning planning... 
Jan 13 18:46:19 ivy mongos[27723]: =============================
Jan 13 18:46:19 ivy mongos[27723]: Options = NO_TABLE_SCAN
Jan 13 18:46:19 ivy mongos[27723]: Canonical query:
Jan 13 18:46:19 ivy mongos[27723]: ns=visitor_api.sessions4Tree: $and
Jan 13 18:46:19 ivy mongos[27723]: r $eq "gce-us-east1"
Jan 13 18:46:19 ivy mongos[27723]: u $lt "UxOVavlFZXtKRL5MnB+1uQ=="
Jan 13 18:46:19 ivy mongos[27723]: Sort: {}
Jan 13 18:46:19 ivy mongos[27723]: Proj: {}
Jan 13 18:46:19 ivy mongos[27723]: =============================
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D QUERY [conn118] Index 0 is kp: { r: 1.0, u: 1.0 } name: 'shardkey'
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D QUERY [conn118] Predicate over field 'u'
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D QUERY [conn118] Predicate over field 'r'
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D QUERY [conn118] Relevant index 0 is kp: { r: 1.0, u: 1.0 } name: 'shardkey'
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.374+0000 D QUERY [conn118] Rated tree:
Jan 13 18:46:19 ivy mongos[27723]: $and
Jan 13 18:46:19 ivy mongos[27723]: r $eq "gce-us-east1" || First: 0 notFirst: full path: r
Jan 13 18:46:19 ivy mongos[27723]: u $lt "UxOVavlFZXtKRL5MnB+1uQ==" || First: notFirst: 0 full path: u
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.375+0000 D QUERY [conn118] Tagging memoID 1
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.375+0000 D QUERY [conn118] Enumerator: memo just before moving:
Jan 13 18:46:19 ivy mongos[27723]: [Node #1]: AND enumstate counter 0
Jan 13 18:46:19 ivy mongos[27723]: choice 0:
Jan 13 18:46:19 ivy mongos[27723]: subnodes:
Jan 13 18:46:19 ivy mongos[27723]: idx[0]
Jan 13 18:46:19 ivy mongos[27723]: pos 0 pred r $eq "gce-us-east1"
Jan 13 18:46:19 ivy mongos[27723]: pos 1 pred u $lt "UxOVavlFZXtKRL5MnB+1uQ=="
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.375+0000 D QUERY [conn118] About to build solntree from tagged tree:
Jan 13 18:46:19 ivy mongos[27723]: $and
Jan 13 18:46:19 ivy mongos[27723]: r $eq "gce-us-east1" || Selected Index #0 pos 0 combine 1
Jan 13 18:46:19 ivy mongos[27723]: u $lt "UxOVavlFZXtKRL5MnB+1uQ==" || Selected Index #0 pos 1 combine 1
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.375+0000 D QUERY [conn118] Planner: adding solution:
Jan 13 18:46:19 ivy mongos[27723]: FETCH
Jan 13 18:46:19 ivy mongos[27723]: ---fetched = 1
Jan 13 18:46:19 ivy mongos[27723]: ---sortedByDiskLoc = 0
Jan 13 18:46:19 ivy mongos[27723]: ---getSort = [{ r: 1 }, { r: 1, u: 1 }, { u: 1 }, ]
Jan 13 18:46:19 ivy mongos[27723]: ---Child:
Jan 13 18:46:19 ivy mongos[27723]: ------IXSCAN
Jan 13 18:46:19 ivy mongos[27723]: ---------indexName = shardkey
Jan 13 18:46:19 ivy mongos[27723]: keyPattern = { r: 1.0, u: 1.0 }
Jan 13 18:46:19 ivy mongos[27723]: ---------direction = 1
Jan 13 18:46:19 ivy mongos[27723]: ---------bounds = field #0['r']: ["gce-us-east1", "gce-us-east1"], field #1['u']: ["", "UxOVavlFZXtKRL5MnB+1uQ==")
Jan 13 18:46:19 ivy mongos[27723]: ---------fetched = 0
Jan 13 18:46:19 ivy mongos[27723]: ---------sortedByDiskLoc = 0
Jan 13 18:46:19 ivy mongos[27723]: ---------getSort = [{ r: 1 }, { r: 1, u: 1 }, { u: 1 }, ]
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.375+0000 D QUERY [conn118] Planner: outputted 1 indexed solutions.
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.376+0000 D EXECUTOR [conn118] Scheduling remote command request: RemoteCommand 14088 -- target:phil.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "UxOVavlFZXtKRL5MnB+1uQ==" } }, limit: 2, shardVersion: [ Timestamp(2076, 1), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("d48abc38-f0be-40e2-81e7-91d6097ed9e8"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.376+0000 D ASIO [conn118] startCommand: RemoteCommand 14088 -- target:phil.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "UxOVavlFZXtKRL5MnB+1uQ==" } }, limit: 2, shardVersion: [ Timestamp(2076, 1), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("d48abc38-f0be-40e2-81e7-91d6097ed9e8"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.376+0000 I ASIO [TaskExecutorPool-0] Connecting to phil.node.gce-us-east1.admiral:27017 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.376+0000 D ASIO [TaskExecutorPool-0] Finished connection setup. Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.377+0000 D EXECUTOR [conn118] Scheduling remote command request: RemoteCommand 14089 -- target:queen.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "UxOVavlFZXtKRL5MnB+1uQ==" } }, limit: 2, shardVersion: [ Timestamp(2076, 0), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("d48abc38-f0be-40e2-81e7-91d6097ed9e8"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.377+0000 D ASIO [conn118] startCommand: RemoteCommand 14089 -- target:queen.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "UxOVavlFZXtKRL5MnB+1uQ==" } }, limit: 2, shardVersion: [ Timestamp(2076, 0), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("d48abc38-f0be-40e2-81e7-91d6097ed9e8"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.377+0000 I ASIO [TaskExecutorPool-0] Connecting to queen.node.gce-us-east1.admiral:27017 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.377+0000 D ASIO [TaskExecutorPool-0] Finished connection setup. 
Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.406+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.406+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: "phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { ts: Timestamp(1547405179, 312), t: 22 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 232), t: 22 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179383), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 312), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547405179, 232), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 312), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.406+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.406+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547405179, 312), t: 22 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.406+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.444+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.444+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 269), t: 22 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 264), t: 22 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179422), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 269), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: 
Timestamp(1547405179, 264), $configServerState: { opTime: { ts: Timestamp(1547405176, 541), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 343), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.444+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.444+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547405179, 269), t: 22 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.444+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 300), t: 22 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 264), t: 22 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179441), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 300), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405179, 264), $configServerState: { opTime: { ts: Timestamp(1547405163, 309), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 335), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547405179, 300), t: 22 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 76 msec Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_central1 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.445+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host 
camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547405179, 340), t: 4 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 263), t: 4 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179442), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 340), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547405179, 263), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 340), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547405179, 340), t: 4 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.448+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.448+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 339), t: 4 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 263), t: 4 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179443), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 339), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405179, 263), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 340), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.448+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.448+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547405179, 339), t: 4 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.448+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.487+0000 D NETWORK [TaskExecutorPool-0] Starting client-side compression negotiation Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.487+0000 D NETWORK [TaskExecutorPool-0] Offering snappy compressor to server Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.487+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 300), t: 4 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 235), t: 4 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179463), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 300), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405179, 235), $configServerState: { opTime: { ts: Timestamp(1547405170, 315), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 318), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547405179, 300), t: 4 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 42 msec Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_west1 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.488+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy 
mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [TaskExecutorPool-0] Finishing client-side compression negotiation Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [TaskExecutorPool-0] Received message compressors from server Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [TaskExecutorPool-0] Adding compressor snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.524+0000 D NETWORK [conn118] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.528+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.528+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: "tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547405179, 365), t: 28 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 262), t: 28 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179503), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 365), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547405179, 262), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 365), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.528+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.528+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547405179, 365), t: 28 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.528+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.559+0000 D NETWORK [TaskExecutorPool-0] Starting client-side compression negotiation Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.560+0000 D NETWORK [TaskExecutorPool-0] Offering snappy compressor to server Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.560+0000 D NETWORK [TaskExecutorPool-0] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.568+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.568+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 387), t: 28 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 299), t: 28 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179543), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 387), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405179, 299), $configServerState: { opTime: { ts: Timestamp(1547405171, 3), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 403), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.568+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.568+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547405179, 387), t: 28 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.568+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 363), t: 28 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 262), t: 28 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179564), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 363), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1547405179, 262), $configServerState: { opTime: { ts: Timestamp(1547405169, 831), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 368), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547405179, 363), t: 28 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 took 81 msec Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.570+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [TaskExecutorPool-0] Finishing client-side compression negotiation Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [TaskExecutorPool-0] Received message compressors from server Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [TaskExecutorPool-0] Adding compressor snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.596+0000 D NETWORK [conn118] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.641+0000 D NETWORK [conn118] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.642+0000 D ASIO [conn118] Request 14089 finished with response: { cursor: { firstBatch: [ { _id: ObjectId('5c392e00dea2eefbafba7371'), r: "gce-us-east1", u: "Tr+ySHb1u21EANpu5kfaIg==", pid: "A-599B47972741B944581A1687-1", oid: "", e: true, incr: 4, tsuc: 1534260320, tsc: 1547251200, pc: 286, pi: 632, v: 4, ss: 0, tslp: 1547337337, uf: 1, dsb: new Date(1546287438000), sf: 1, tse: 1547337639, dse: new Date(1547337639000) }, { _id: ObjectId('5c392e00a80895a8ddfa4a49'), r: "gce-us-east1", u: "Gq0RNIHgRTnPVgVG72cLcw==", pid: "A-599B47972741B944581A1687-1", oid: "", e: true, incr: 3, tsuc: 1547223079, tsc: 1547251200, pc: 287, pi: 378, v: 4, ss: 0, tslp: 1547337584, tse: 1547337886, dse: new Date(1547337886000) } ], id: 0, ns: "visitor_api.sessions4" }, ok: 1.0, operationTime: Timestamp(1547405179, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547405179, 396), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.642+0000 D 
EXECUTOR [conn118] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: ObjectId('5c392e00dea2eefbafba7371'), r: "gce-us-east1", u: "Tr+ySHb1u21EANpu5kfaIg==", pid: "A-599B47972741B944581A1687-1", oid: "", e: true, incr: 4, tsuc: 1534260320, tsc: 1547251200, pc: 286, pi: 632, v: 4, ss: 0, tslp: 1547337337, uf: 1, dsb: new Date(1546287438000), sf: 1, tse: 1547337639, dse: new Date(1547337639000) }, { _id: ObjectId('5c392e00a80895a8ddfa4a49'), r: "gce-us-east1", u: "Gq0RNIHgRTnPVgVG72cLcw==", pid: "A-599B47972741B944581A1687-1", oid: "", e: true, incr: 3, tsuc: 1547223079, tsc: 1547251200, pc: 287, pi: 378, v: 4, ss: 0, tslp: 1547337584, tse: 1547337886, dse: new Date(1547337886000) } ], id: 0, ns: "visitor_api.sessions4" }, ok: 1.0, operationTime: Timestamp(1547405179, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547405179, 396), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.670+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.671+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547405179, 445), t: 9 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 430), t: 9 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179615), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 445), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547405179, 430), $configServerState: { opTime: { ts: Timestamp(1547405178, 815), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 445), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.671+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.671+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547405179, 445), t: 9 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.671+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 526), t: 9 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 507), t: 9 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179715), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 526), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547405179, 507), $configServerState: { opTime: { ts: Timestamp(1547405172, 857), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 547), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547405179, 526), t: 9 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 197 msec Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.767+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.863+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.863+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547405179, 620), t: 4 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 558), t: 4 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179810), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 620), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, 
lastCommittedOpTime: Timestamp(1547405179, 558), $configServerState: { opTime: { ts: Timestamp(1547405179, 242), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 620), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.863+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.863+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547405179, 620), t: 4 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.863+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405179, 677), t: 4 }, lastWriteDate: new Date(1547405179000), majorityOpTime: { ts: Timestamp(1547405179, 620), t: 4 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405179912), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405179, 677), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405179, 620), $configServerState: { opTime: { ts: Timestamp(1547405171, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405179, 677), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T18:46:19.000+0000 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547405179, 677), t: 4 } Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west2 took 202 msec Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3 Jan 13 18:46:19 ivy mongos[27723]: 2019-01-13T18:46:19.970+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.076+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.076+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host 
albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547405180, 5), t: 10 }, lastWriteDate: new Date(1547405180000), majorityOpTime: { ts: Timestamp(1547405179, 763), t: 10 }, majorityWriteDate: new Date(1547405179000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405180018), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405180, 5), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547405179, 763), $configServerState: { opTime: { ts: Timestamp(1547405179, 583), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405180, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.079+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T18:46:20.000+0000 Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.079+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547405180, 5), t: 10 } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.079+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.179+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.180+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405180, 62), t: 10 }, lastWriteDate: new Date(1547405180000), majorityOpTime: { ts: Timestamp(1547405180, 26), t: 10 }, majorityWriteDate: new Date(1547405180000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405180125), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405180, 62), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547405180, 26), $configServerState: { opTime: { ts: Timestamp(1547405175, 513), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405180, 63), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.180+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T18:46:20.000+0000 Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.180+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547405180, 62), t: 10 } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.180+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 209 msec Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.180+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2 Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.180+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547405180, 160), t: 3 }, lastWriteDate: new Date(1547405180000), majorityOpTime: { ts: Timestamp(1547405180, 86), t: 3 }, majorityWriteDate: new Date(1547405180000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405180195), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405180, 160), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547405180, 86), $configServerState: { opTime: { ts: Timestamp(1547405179, 584), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405180, 160), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T18:46:20.000+0000 Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547405180, 160), t: 3 } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: 
"sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405180, 124), t: 3 }, lastWriteDate: new Date(1547405180000), majorityOpTime: { ts: Timestamp(1547405180, 54), t: 3 }, majorityWriteDate: new Date(1547405180000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405180213), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405180, 124), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405180, 54), $configServerState: { opTime: { ts: Timestamp(1547405168, 435), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405180, 143), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T18:46:20.000+0000 Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547405180, 124), t: 3 } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.255+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.256+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547405180, 180), t: 3 }, lastWriteDate: new Date(1547405180000), majorityOpTime: { ts: Timestamp(1547405180, 86), t: 3 }, majorityWriteDate: new Date(1547405180000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547405180228), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547405180, 180), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405180, 86), $configServerState: { opTime: { ts: Timestamp(1547405174, 983), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547405180, 199), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.256+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T18:46:20.000+0000 Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.256+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: 
Timestamp(1547405180, 180), t: 3 } Jan 13 18:46:20 ivy mongos[27723]: 2019-01-13T18:46:20.256+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 75 msec Jan 13 18:46:22 ivy mongos[27723]: 2019-01-13T18:46:22.482+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:22 ivy mongos[27723]: 2019-01-13T18:46:22.521+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:22 ivy mongos[27723]: 2019-01-13T18:46:22.521+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b8780a1824195fadc68bb Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 14091 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:54.109+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547405184109), up: 11473, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 14091 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:54.109+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547405184109), up: 11473, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.109+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.289+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D ASIO [ShardRegistry] Request 14091 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547405184, 54), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 54), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 54), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405184, 54), $clusterTime: { clusterTime: Timestamp(1547405184, 223), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), 
keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547405184, 54), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 54), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 54), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405184, 54), $clusterTime: { clusterTime: Timestamp(1547405184, 223), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 14092 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T18:46:54.290+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547405184, 54), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 14092 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T18:46:54.290+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547405184, 54), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.290+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.328+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D ASIO [ShardRegistry] Request 14092 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405184, 223), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 54), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 54), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405184, 54), $clusterTime: { clusterTime: Timestamp(1547405184, 223), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { 
_id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405184, 223), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 54), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 54), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405184, 54), $clusterTime: { clusterTime: Timestamp(1547405184, 223), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 14093 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:54.329+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547405184, 54), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 14093 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:54.329+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547405184, 54), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.329+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.365+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.365+0000 D ASIO [ShardRegistry] Request 14093 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405184, 169), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 169), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 169), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405184, 169), $clusterTime: { clusterTime: Timestamp(1547405184, 248), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.365+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405184, 169), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 169), t: 1 }, 
lastOpVisible: { ts: Timestamp(1547405184, 169), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405184, 169), $clusterTime: { clusterTime: Timestamp(1547405184, 248), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 14094 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:54.366+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547405184, 169), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 14094 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T18:46:54.366+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547405184, 169), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.366+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.404+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.404+0000 D ASIO [ShardRegistry] Request 14094 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405184, 223), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 169), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 169), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405184, 169), $clusterTime: { clusterTime: Timestamp(1547405184, 248), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.404+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405184, 223), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405184, 169), t: 1 }, lastOpVisible: { ts: Timestamp(1547405184, 169), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547405184, 169), 
$clusterTime: { clusterTime: Timestamp(1547405184, 248), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:24 ivy mongos[27723]: 2019-01-13T18:46:24.404+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 22485 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.223+0000 D SHARDING [conn42] Command end db: admin msg id: 22485 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.226+0000 D SHARDING [conn42] Command begin db: admin msg id: 22487 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.226+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.226+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.226+0000 D SHARDING [conn42] Command end db: admin msg id: 22487 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.226+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.227+0000 D SHARDING [conn42] Command begin db: admin msg id: 22489 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.227+0000 D SHARDING [conn42] Command end db: admin msg id: 22489 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.227+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D SHARDING [conn42] Command begin db: config msg id: 22491 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14095 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D ASIO [conn42] startCommand: RemoteCommand 14095 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.229+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.265+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.265+0000 D ASIO [conn42] Request 14095 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547405189, 55), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405188, 665), $clusterTime: { clusterTime: Timestamp(1547405189, 134), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.265+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547405189, 55), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405188, 665), $clusterTime: { clusterTime: Timestamp(1547405189, 134), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D SHARDING [conn42] Command end db: config msg id: 22491 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D SHARDING [conn42] Command begin db: config msg id: 22493 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b8785a1824195fadc68c4 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14096 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D ASIO [conn42] startCommand: RemoteCommand 14096 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.266+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.331+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.331+0000 D ASIO [ShardRegistry] Request 14096 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547405189, 149), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405188, 665), t: 1 }, lastOpVisible: { ts: Timestamp(1547405188, 665), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405188, 665), $clusterTime: { clusterTime: Timestamp(1547405189, 164), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.331+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547405189, 149), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405188, 665), t: 1 }, lastOpVisible: { ts: Timestamp(1547405188, 665), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405188, 665), $clusterTime: { clusterTime: Timestamp(1547405189, 164), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D SHARDING [conn42] Command end db: config msg id: 22493 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 65ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D SHARDING [conn42] Command begin db: config msg id: 22495 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14097 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D ASIO [conn42] startCommand: RemoteCommand 14097 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.332+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.369+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.369+0000 D ASIO [conn42] Request 14097 finished with 
response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405189, 149), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 55), $clusterTime: { clusterTime: Timestamp(1547405189, 196), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.369+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547405189, 149), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 55), $clusterTime: { clusterTime: Timestamp(1547405189, 196), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.369+0000 D SHARDING [conn42] Command end db: config msg id: 22495 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.369+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D SHARDING [conn42] Command begin db: config msg id: 22497 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b8785a1824195fadc68c7 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14098 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547404589369) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D ASIO [conn42] startCommand: RemoteCommand 14098 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547404589369) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.370+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.440+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D ASIO [ShardRegistry] Request 14098 
finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405189, 148), t: 1 }, lastOpVisible: { ts: Timestamp(1547405189, 148), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 148), $clusterTime: { clusterTime: Timestamp(1547405189, 270), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405189, 148), t: 1 }, lastOpVisible: { ts: Timestamp(1547405189, 148), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 148), $clusterTime: { clusterTime: Timestamp(1547405189, 270), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D SHARDING [conn42] Command end db: config msg id: 22497 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547404589369) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 71ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D SHARDING [conn42] Command begin db: config msg id: 22499 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14099 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D ASIO [conn42] startCommand: RemoteCommand 14099 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.441+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.479+0000 D NETWORK [conn42] Decompressing message with 
snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.479+0000 D ASIO [conn42] Request 14099 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 148), $clusterTime: { clusterTime: Timestamp(1547405189, 274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.479+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: 
"sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 148), $clusterTime: { clusterTime: Timestamp(1547405189, 274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.479+0000 D SHARDING [conn42] Command end db: config msg id: 22499 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.479+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 38ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D SHARDING [conn42] Command begin db: config msg id: 22501 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14100 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D ASIO [conn42] startCommand: RemoteCommand 14100 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.480+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.517+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.517+0000 D ASIO [conn42] Request 14100 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 149), $clusterTime: { clusterTime: Timestamp(1547405189, 283), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.517+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 149), $clusterTime: { clusterTime: Timestamp(1547405189, 283), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.517+0000 D SHARDING [conn42] Command end db: config msg id: 22501 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.517+0000 I COMMAND 
[conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 37ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D SHARDING [conn42] Command begin db: config msg id: 22503 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b8785a1824195fadc68cb Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14101 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D ASIO [conn42] startCommand: RemoteCommand 14101 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.518+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.553+0000 D SHARDING [conn70] Command begin db: admin msg id: 1017 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.553+0000 D SHARDING [conn70] Command end db: admin msg id: 1017 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.553+0000 I COMMAND [conn70] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.609+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.609+0000 D ASIO [ShardRegistry] Request 14101 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405189, 149), t: 1 }, lastOpVisible: { ts: Timestamp(1547405189, 149), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 149), $clusterTime: { clusterTime: Timestamp(1547405189, 335), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.609+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: 
"sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405189, 149), t: 1 }, lastOpVisible: { ts: Timestamp(1547405189, 149), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 149), $clusterTime: { clusterTime: Timestamp(1547405189, 335), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D SHARDING [conn42] Command end db: config msg id: 22503 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 91ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D SHARDING [conn42] Command begin db: config msg id: 22505 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b8785a1824195fadc68ce Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14102 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D ASIO [conn42] startCommand: RemoteCommand 14102 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.610+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.647+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.647+0000 D ASIO [ShardRegistry] Request 14102 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405189, 270), t: 1 }, lastOpVisible: { ts: Timestamp(1547405189, 270), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, 
$gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.647+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547405189, 270), t: 1 }, lastOpVisible: { ts: Timestamp(1547405189, 270), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547405184, 54), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.647+0000 D SHARDING [conn42] Command end db: config msg id: 22505 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.647+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.648+0000 D SHARDING [conn42] Command begin db: config msg id: 22507 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.648+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14103 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.648+0000 D ASIO [conn42] startCommand: RemoteCommand 14103 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.648+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.648+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.649+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.649+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.685+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.685+0000 D ASIO [conn42] Request 14103 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.685+0000 D EXECUTOR [conn42] 
Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.685+0000 D SHARDING [conn42] Command end db: config msg id: 22507 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.685+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 36ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D SHARDING [conn42] Command begin db: config msg id: 22509 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14104 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547404589687) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D ASIO [conn42] startCommand: RemoteCommand 14104 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547404589687) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.687+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.724+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.724+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Request 14104 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547405182678), up: 3498379, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547405187693), up: 3444524, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547405188721), up: 3498286, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405185214), up: 12124, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187332), up: 86133, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405179951), up: 86151, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405184999), up: 86130, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405188052), up: 86106, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187774), up: 86106, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405179683), up: 86069, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us Jan 13 18:46:29 ivy mongos[27723]: -east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187497), up: 86049, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405186715), up: 86076, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405183206), up: 86045, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405180832), up: 86017, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405179679), up: 86016, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405180507), up: 85962, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405182533), up: 85993, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187263), up: 85998, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185274), up: 85966, waiting: true }, { _id: "jaco .......... 
67, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405189098), up: 86540, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185242), up: 86572, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185558), up: 87331, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "u Jan 13 18:46:29 ivy mongos[27723]: rban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405183131), up: 87388, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405184755), up: 87391, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405182475), up: 87328, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185168), up: 87920, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405186601), up: 87922, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405179742), up: 87855, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185035), up: 87710, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405187692), up: 87863, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405181968), up: 87645, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405188724), up: 87714, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405182455), up: 87645, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405188626), up: 87526, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185037), up: 87585, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405179507), u Jan 13 18:46:29 ivy mongos[27723]: p: 87581, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405184109), up: 11473, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405188624), up: 87465, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405179506), up: 87517, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 412), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.724+0000 D 
EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547405182678), up: 3498379, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547405187693), up: 3444524, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547405188721), up: 3498286, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405185214), up: 12124, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187332), up: 86133, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405179951), up: 86151, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405184999), up: 86130, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405188052), up: 86106, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187774), up: 86106, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405179683), up: 86069, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie. 
Jan 13 18:46:29 ivy mongos[27723]: node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187497), up: 86049, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405186715), up: 86076, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405183206), up: 86045, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405180832), up: 86017, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405179679), up: 86016, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405180507), up: 85962, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405182533), up: 85993, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405187263), up: 85998, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185274), up: 85966, waiting: true }, { .......... 67, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405189098), up: 86540, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185242), up: 86572, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185558), up: 87331, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "u Jan 13 18:46:29 ivy mongos[27723]: rban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405183131), up: 87388, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547405184755), up: 87391, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405182475), up: 87328, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185168), up: 87920, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405186601), up: 87922, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405179742), up: 87855, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185035), up: 87710, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405187692), up: 87863, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405181968), up: 87645, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", ping: 
new Date(1547405188724), up: 87714, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405182455), up: 87645, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405188626), up: 87526, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405185037), up: 87585, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405179507), u Jan 13 18:46:29 ivy mongos[27723]: p: 87581, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405184109), up: 11473, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405188624), up: 87465, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547405179506), up: 87517, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 412), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.724+0000 D SHARDING [conn42] Command end db: config msg id: 22509 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.724+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547404589687) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:63 reslen:9894 37ms Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D SHARDING [conn42] Command begin db: config msg id: 22511 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 14105 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D ASIO [conn42] startCommand: RemoteCommand 14105 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.726+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 
18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.763+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.763+0000 D ASIO [conn42] Request 14105 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 412), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.763+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547405189, 270), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547405189, 270), $clusterTime: { clusterTime: Timestamp(1547405189, 412), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.763+0000 D SHARDING [conn42] Command end db: config msg id: 22511 Jan 13 18:46:29 ivy mongos[27723]: 2019-01-13T18:46:29.763+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms
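The five config-database commands traced above (RemoteCommand 14101 through 14105) carry comments pointing at the Percona mongodb_exporter's mongos sharding_status collector, and together they make up one exporter scrape: chunks per shard, partitioned vs. unpartitioned databases, non-dropped collections, mongos routers that have pinged recently, and the balancer lock. For reference, below is a minimal pymongo sketch (not part of the log) that issues the same queries against a mongos; the connection URI is a placeholder, and the 10-minute ping cutoff is inferred from the ping $gte new Date(1547404589687) filter relative to the 18:46:29 scrape time.

# Minimal sketch reproducing the exporter's config-database queries.
# Assumes pymongo is installed; the URI is a placeholder mongos address.
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder mongos
config = client["config"]

# RemoteCommand 14101: chunk count per shard.
chunks_per_shard = list(config["chunks"].aggregate([
    {"$group": {"_id": "$shard", "count": {"$sum": 1}}},
]))

# RemoteCommand 14102: databases grouped by whether they are partitioned.
databases = list(config["databases"].aggregate([
    {"$match": {"_id": {"$ne": "admin"}}},
    {"$group": {"_id": "$partitioned", "total": {"$sum": 1}}},
]))

# RemoteCommand 14103: non-dropped sharded collections. The log shows the
# legacy count command; count_documents is the driver-side equivalent.
n_collections = config["collections"].count_documents({"dropped": False})

# RemoteCommand 14104: mongos instances that pinged within the last 10 minutes
# (the cutoff matches the $gte filter of 1547404589687 ms seen in the log).
cutoff = datetime.datetime.utcnow() - datetime.timedelta(minutes=10)
active_mongos = list(config["mongos"].find({"ping": {"$gte": cutoff}}))

# RemoteCommand 14105: the balancer lock document (empty batch in the log).
balancer_lock = config["locks"].find_one({"_id": "balancer"})

print(chunks_per_shard, databases, n_collections, len(active_mongos), balancer_lock)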