-- Logs begin at Fri 2019-01-11 22:49:50 UTC. --
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.344+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected connection!
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.344+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.449+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.449+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to albert.node.gce-europe-west3.admiral:27017 (1 connections now open to albert.node.gce-europe-west3.admiral:27017 with a 5 second timeout)
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.449+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.554+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.554+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393713, 539), t: 10 }, lastWriteDate: new Date(1547393713000), majorityOpTime: { ts: Timestamp(1547393713, 516), t: 10 }, majorityWriteDate: new Date(1547393713000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393713496), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393713, 539), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393713, 516), $configServerState: { opTime: { ts: Timestamp(1547393713, 130), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393713, 539), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.554+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:35:13.000+0000
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.554+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393713, 539), t: 10 }
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.554+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] creating new connection to:jordan.node.gce-europe-west1.admiral:27017
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.784+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected to server jordan.node.gce-europe-west1.admiral:27017
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.784+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting client-side compression negotiation
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.784+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Offering snappy compressor to server
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.884+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Finishing client-side compression negotiation
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.884+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Received message compressors from server
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.884+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Adding compressor snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.884+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected connection!
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.884+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.983+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.983+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to jordan.node.gce-europe-west1.admiral:27017 (1 connections now open to jordan.node.gce-europe-west1.admiral:27017 with a 5 second timeout)
Jan 13 15:35:13 ivy mongos[27723]: 2019-01-13T15:35:13.983+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393713, 1220), t: 10 }, lastWriteDate: new Date(1547393713000), majorityOpTime: { ts: Timestamp(1547393713, 1219), t: 10 }, majorityWriteDate: new Date(1547393713000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393714027), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393713, 1220), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393713, 1219), $configServerState: { opTime: { ts: Timestamp(1547393707, 100), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393714, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:35:13.000+0000
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393713, 1220), t: 10 }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 1099 msec
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] creating new connection to:april.node.gce-us-east1.admiral:27017
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.163+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected to server april.node.gce-us-east1.admiral:27017
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.163+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.163+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Offering snappy compressor to server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Finishing client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Received message compressors from server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Adding compressor snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected connection!
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.223+0000 I NETWORK [listener] connection accepted from 127.0.0.1:27567 #30 (1 connection now open)
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.223+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.224+0000 D SHARDING [conn30] Command begin db: admin msg id: 1
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 D SHARDING [conn30] Command end db: admin msg id: 1
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 I COMMAND [conn30] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:206 protocol:op_query 0ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 D SHARDING [conn30] Command begin db: admin msg id: 3
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 D NETWORK [conn30] Starting server-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 D NETWORK [conn30] Compression negotiation not requested by client
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 D SHARDING [conn30] Command end db: admin msg id: 3
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.225+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.226+0000 D SHARDING [conn30] Command begin db: admin msg id: 5
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.226+0000 D SHARDING [conn30] Command end db: admin msg id: 5
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.226+0000 I COMMAND [conn30] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.228+0000 D SHARDING [conn30] Command begin db: admin msg id: 7
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 D SHARDING [conn30] Command end db: admin msg id: 7
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 I COMMAND [conn30] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 D SHARDING [conn30] Command begin db: admin msg id: 9
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 D NETWORK [conn30] Starting server-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 D NETWORK [conn30] Compression negotiation not requested by client
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 D SHARDING [conn30] Command end db: admin msg id: 9
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.229+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.231+0000 D SHARDING [conn30] Command begin db: admin msg id: 11
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.231+0000 D SHARDING [conn30] Command end db: admin msg id: 11
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.231+0000 I COMMAND [conn30] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.233+0000 D SHARDING [conn30] Command begin db: config msg id: 13
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.233+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 16 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.233+0000 D ASIO [conn30] startCommand: RemoteCommand 16 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.233+0000 I ASIO [TaskExecutorPool-0] Connecting to ira.node.gce-us-east1.admiral:27019
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.233+0000 D ASIO [TaskExecutorPool-0] Finished connection setup.
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.236+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.236+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to april.node.gce-us-east1.admiral:27017 (1 connections now open to april.node.gce-us-east1.admiral:27017 with a 5 second timeout)
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.236+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.272+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.272+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] creating new connection to:queen.node.gce-us-east1.admiral:27017
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.339+0000 D NETWORK [TaskExecutorPool-0] Starting client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.339+0000 D NETWORK [TaskExecutorPool-0] Offering snappy compressor to server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.339+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.362+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected to server queen.node.gce-us-east1.admiral:27017
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.362+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.362+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Offering snappy compressor to server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [TaskExecutorPool-0] Finishing client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [TaskExecutorPool-0] Received message compressors from server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [TaskExecutorPool-0] Adding compressor snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.378+0000 D NETWORK [conn30] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Finishing client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Received message compressors from server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Adding compressor snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected connection!
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.414+0000 D NETWORK [conn30] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.414+0000 D ASIO [conn30] Request 16 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393714, 7), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 386), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.414+0000 D EXECUTOR [conn30] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393714, 7), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 386), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.415+0000 D SHARDING [conn30] Command end db: config msg id: 13
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.415+0000 I COMMAND [conn30] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 182ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.415+0000 D SHARDING [conn30] Command begin db: config msg id: 15
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D TRACKING [conn30] Cmd: aggregate, TrackingId: 5c3b5ab2a1824195fadc0fa2
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 17 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D ASIO [conn30] startCommand: RemoteCommand 17 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.416+0000 D NETWORK [conn30] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.434+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.434+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to queen.node.gce-us-east1.admiral:27017 (1 connections now open to queen.node.gce-us-east1.admiral:27017 with a 5 second timeout)
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.434+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393714, 297), t: 3 }, lastWriteDate: new Date(1547393714000), majorityOpTime: { ts: Timestamp(1547393714, 161), t: 3 }, majorityWriteDate: new Date(1547393714000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393714249), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393714, 297), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393714, 161), $configServerState: { opTime: { ts: Timestamp(1547393700, 33), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393714, 302), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:35:14.000+0000
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393714, 297), t: 3 }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393714, 529), t: 3 }, lastWriteDate: new Date(1547393714000), majorityOpTime: { ts: Timestamp(1547393714, 374), t: 3 }, majorityWriteDate: new Date(1547393714000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393714451), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393714, 529), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393714, 374), $configServerState: { opTime: { ts: Timestamp(1547393714, 7), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393714, 529), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:35:14.000+0000
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393714, 529), t: 3 }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.471+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] creating new connection to:ralph.node.gce-us-central1.admiral:27017
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.486+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.486+0000 D ASIO [ShardRegistry] Request 17 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 7), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 7), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 517), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.486+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 7), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 7), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 517), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.486+0000 D SHARDING [conn30] Command end db: config msg id: 15
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.486+0000 I COMMAND [conn30] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 71ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D SHARDING [conn30] Command begin db: config msg id: 17
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 18 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D ASIO [conn30] startCommand: RemoteCommand 18 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.487+0000 D NETWORK [conn30] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.524+0000 D NETWORK [conn30] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.524+0000 D ASIO [conn30] Request 18 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 517), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.524+0000 D EXECUTOR [conn30] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 517), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.524+0000 D SHARDING [conn30] Command end db: config msg id: 17
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.524+0000 I COMMAND [conn30] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D SHARDING [conn30] Command begin db: config msg id: 19
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D TRACKING [conn30] Cmd: aggregate, TrackingId: 5c3b5ab2a1824195fadc0fa5
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 19 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393114524) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D ASIO [conn30] startCommand: RemoteCommand 19 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393114524) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.525+0000 D NETWORK [conn30] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected to server ralph.node.gce-us-central1.admiral:27017
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Offering snappy compressor to server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Finishing client-side compression negotiation
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Received message compressors from server
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Adding compressor snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] connected connection!
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.547+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 I NETWORK [ReplicaSetMonitor-TaskExecutor] Successfully connected to ralph.node.gce-us-central1.admiral:27017 (1 connections now open to ralph.node.gce-us-central1.admiral:27017 with a 5 second timeout)
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393714, 614), t: 3 }, lastWriteDate: new Date(1547393714000), majorityOpTime: { ts: Timestamp(1547393714, 442), t: 3 }, majorityWriteDate: new Date(1547393714000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393714543), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393714, 614), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393714, 442), $configServerState: { opTime: { ts: Timestamp(1547393707, 162), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393714, 627), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:35:14.000+0000
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393714, 614), t: 3 }
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 466 msec
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.586+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.586+0000 D ASIO [ShardRegistry] Request 19 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: 
Timestamp(1547393714, 7), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 7), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 7), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 688), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.586+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 7), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 7), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 688), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.586+0000 D SHARDING [conn30] Command end db: config msg id: 19 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.586+0000 I COMMAND [conn30] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393114524) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 61ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D SHARDING [conn30] Command begin db: 
config msg id: 21 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 20 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D ASIO [conn30] startCommand: RemoteCommand 20 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.587+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.623+0000 D NETWORK [conn30] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D ASIO [conn30] Request 20 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", 
state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 798), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D EXECUTOR [conn30] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: 
"sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393714, 7), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 798), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D SHARDING [conn30] Command end db: config msg id: 21 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 I COMMAND [conn30] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: 
"/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 36ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D SHARDING [conn30] Command begin db: config msg id: 23 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 21 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D ASIO [conn30] startCommand: RemoteCommand 21 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.624+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.661+0000 D NETWORK [conn30] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.661+0000 D ASIO [conn30] Request 21 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 822), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.661+0000 D EXECUTOR [conn30] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 822), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.662+0000 D SHARDING [conn30] Command end db: config msg id: 23 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.662+0000 I COMMAND [conn30] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 37ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D SHARDING [conn30] Command begin db: config msg id: 25 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D TRACKING [conn30] Cmd: aggregate, TrackingId: 5c3b5ab2a1824195fadc0fa9 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 22 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D ASIO [conn30] startCommand: RemoteCommand 22 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D NETWORK [ShardRegistry] Timer received 
error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.664+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.755+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.755+0000 D ASIO [ShardRegistry] Request 22 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 7), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 7), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 868), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.755+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: 
"sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 7), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 7), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 7), $clusterTime: { clusterTime: Timestamp(1547393714, 868), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.755+0000 D SHARDING [conn30] Command end db: config msg id: 25 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.755+0000 I COMMAND [conn30] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 91ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.757+0000 D SHARDING [conn30] Command begin db: config msg id: 27 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.757+0000 D TRACKING [conn30] Cmd: aggregate, TrackingId: 5c3b5ab2a1824195fadc0fab Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.757+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 23 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.757+0000 D ASIO [conn30] startCommand: RemoteCommand 23 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: 
{ _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.758+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.758+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.758+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.758+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.794+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.794+0000 D ASIO [ShardRegistry] Request 23 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 822), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 822), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 898), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.794+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393714, 
822), t: 1 }, lastOpVisible: { ts: Timestamp(1547393714, 822), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 898), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.794+0000 D SHARDING [conn30] Command end db: config msg id: 27 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.794+0000 I COMMAND [conn30] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 36ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.795+0000 D SHARDING [conn30] Command begin db: config msg id: 29 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.795+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 24 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.795+0000 D ASIO [conn30] startCommand: RemoteCommand 24 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.795+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.795+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 
2019-01-13T15:35:14.795+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.795+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.831+0000 D NETWORK [conn30] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.831+0000 D ASIO [conn30] Request 24 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 1006), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.831+0000 D EXECUTOR [conn30] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 1006), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D SHARDING [conn30] Command end db: config msg id: 29 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 I COMMAND [conn30] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D SHARDING [conn30] Command begin db: config msg id: 31 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 25 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new 
Date(1547393114832) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D ASIO [conn30] startCommand: RemoteCommand 25 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393114832) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.832+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.869+0000 D NETWORK [conn30] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.869+0000 D ASIO [conn30] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Request 25 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393712174), up: 3486909, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393710418), up: 3433046, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393709381), up: 3486806, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705443), up: 644, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393712728), up: 74658, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393707726), up: 74679, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393707792), up: 74653, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705751), up: 74623, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393712392), up: 74630, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393713284), up: 74603, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east Jan 13 15:35:14 ivy mongos[27723]: 1.admiral" ], mongoVersion: "4.0.5", ping: new 
Date(1547393710481), up: 74572, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705098), up: 74595, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393709416), up: 74571, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393714015), up: 74550, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393706594), up: 74543, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705376), up: 74487, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393709281), up: 74520, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393713078), up: 74524, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393714621), up: 74496, waiting: true }, { _id: "jacob:270 .......... 
75093, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393706908), up: 75057, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393705271), up: 75092, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393705791), up: 75851, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:35:14 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393711877), up: 75917, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393711298), up: 75917, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393710402), up: 75856, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393713469), up: 76449, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393712769), up: 76448, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709416), up: 76384, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709382), up: 76235, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709417), up: 76384, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709383), up: 76172, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393713539), up: 76239, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709785), up: 76173, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709782), up: 76047, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393711279), up: 76112, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393711380) Jan 13 15:35:14 ivy mongos[27723]: , up: 76113, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393710130), up: 0, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709384), up: 75986, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393708236), up: 76046, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 1006), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.869+0000 D EXECUTOR [conn30] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393712174), up: 3486909, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393710418), up: 3433046, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393709381), up: 3486806, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705443), up: 644, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393712728), up: 74658, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393707726), up: 74679, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393707792), up: 74653, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705751), up: 74623, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393712392), up: 74630, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393713284), up: 74603, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393710481), up: 74572, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705098), up: 74595, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393709416), up: 74571, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393714015), up: 74550, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393706594), up: 74543, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393705376), up: 74487, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393709281), up: 74520, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393713078), up: 74524, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393714621), up: 74496, waiting: true }, { _ .......... 
75093, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393706908), up: 75057, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393705271), up: 75092, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393705791), up: 75851, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393711877), up: 75917, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393711298), up: 75917, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393710402), up: 75856, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393713469), up: 76449, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393712769), up: 76448, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709416), up: 76384, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709382), up: 76235, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709417), up: 76384, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709383), up: 76172, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393713539), up: 76239, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709785), up: 76173, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709782), up: 76047, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393711279), up: 76112, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393711380), up: 76113, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393710130), up: 0, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393709384), up: 75986, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393708236), up: 76046, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 1006), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.870+0000 D SHARDING [conn30] Command end db: config msg id: 31 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.870+0000 I COMMAND [conn30] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393114832) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 37ms Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D SHARDING [conn30] Command begin db: config msg id: 33 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D EXECUTOR [conn30] Scheduling remote command request: RemoteCommand 26 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D ASIO [conn30] startCommand: RemoteCommand 26 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.871+0000 D NETWORK [conn30] Compressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.907+0000 D NETWORK [conn30] Decompressing message with snappy Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.908+0000 D ASIO [conn30] Request 26 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 1109), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.908+0000 D EXECUTOR [conn30] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393714, 822), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393714, 822), $clusterTime: { clusterTime: Timestamp(1547393714, 1109), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.908+0000 D SHARDING [conn30] Command end db: config msg id: 33 Jan 13 15:35:14 ivy mongos[27723]: 2019-01-13T15:35:14.908+0000 I COMMAND [conn30] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 36ms Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 I NETWORK [listener] connection accepted from 10.128.0.36:29715 #34 (2 connections now open) Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 D SHARDING [conn34] Command begin db: admin msg id: 13 Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 I NETWORK [conn34] received client metadata from 10.128.0.36:29715 conn34: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5" }, os: { type: "Linux", name: "CentOS Linux release 7.5.1804 (Core) ", architecture: 
"x86_64", version: "Kernel 3.10.0-862.11.6.el7.x86_64" } } Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 D NETWORK [conn34] Starting server-side compression negotiation Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 D NETWORK [conn34] Compression negotiation not requested by client Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 D SHARDING [conn34] Command end db: admin msg id: 13 Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.462+0000 I COMMAND [conn34] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.0.5" }, os: { type: "Linux", name: "CentOS Linux release 7.5.1804 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-862.11.6.el7.x86_64" } }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.463+0000 D SHARDING [conn34] Command begin db: admin msg id: 14 Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.463+0000 D SHARDING [conn34] Command end db: admin msg id: 14 Jan 13 15:35:16 ivy mongos[27723]: 2019-01-13T15:35:16.463+0000 I COMMAND [conn34] command admin.$cmd appName: "MongoDB Shell" command: replSetGetStatus { replSetGetStatus: 1.0, forShell: 1.0, $clusterTime: { clusterTime: Timestamp(1547393407, 713), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:241 protocol:op_msg 0ms Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D SHARDING [conn34] Command begin db: visitor_api msg id: 15 Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D SH_REFR [conn34] Refreshing cached database entry for visitor_api; current cached database info is {} Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Executing a task on behalf of pool 
ConfigServerCatalogCacheLoader Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D TRACKING [ConfigServerCatalogCacheLoader-0] Cmd: NotSet, TrackingId: 5c3b5ab6a1824195fadc0fb3 Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Scheduling remote command request: RemoteCommand 27 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.534+0000 cmd:{ find: "databases", filter: { _id: "visitor_api" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393714, 822), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D ASIO [ConfigServerCatalogCacheLoader-0] startCommand: RemoteCommand 27 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.534+0000 cmd:{ find: "databases", filter: { _id: "visitor_api" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393714, 822), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.534+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D ASIO [ShardRegistry] Request 27 finished with response: { cursor: { firstBatch: [ { _id: "visitor_api", primary: "sessions_gce_us_central1", partitioned: true, version: { uuid: 
UUID("fe817fb8-8f72-4572-8122-63432db89ccc"), lastMod: 2 } } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393718, 550), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 590), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "visitor_api", primary: "sessions_gce_us_central1", partitioned: true, version: { uuid: UUID("fe817fb8-8f72-4572-8122-63432db89ccc"), lastMod: 2 } } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393718, 550), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 590), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database visitor_api took 40 ms and found { _id: "visitor_api", primary: 
"sessions_gce_us_central1", partitioned: true, version: { uuid: UUID("fe817fb8-8f72-4572-8122-63432db89ccc"), lastMod: 2 } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Not reaping because the earliest retirement date is 2019-01-13T15:35:48.534+0000 Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D TRACKING [conn34] Cmd: explain, TrackingId: 5c3b5ab6a1824195fadc0fb2 Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D EXECUTOR [conn34] Scheduling remote command request: RemoteCommand 28 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:35:48.574+0000 cmd:{ find: "collections", filter: { _id: /^visitor_api\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393718, 500), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.574+0000 D ASIO [conn34] startCommand: RemoteCommand 28 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:35:48.574+0000 cmd:{ find: "collections", filter: { _id: /^visitor_api\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393718, 500), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.575+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.575+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.575+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.575+0000 D NETWORK [conn34] Compressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.611+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.611+0000 D ASIO 
[ShardRegistry] Request 28 finished with response: { cursor: { firstBatch: [ { _id: "visitor_api.frequencies", lastmodEpoch: ObjectId('5c38442797b3fe009c0722df'), lastmod: new Date(4294967310), dropped: false, key: { r: 1.0, u: 1.0 }, unique: false, uuid: UUID("748197f9-223c-4528-97d9-e5e9e0a542b5") }, { _id: "visitor_api.sessions4", lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), lastmod: new Date(4294967310), dropped: false, key: { r: 1.0, u: 1.0 }, unique: false, uuid: UUID("9a42c447-0b18-4874-934f-d62940851043") } ], id: 0, ns: "config.collections" }, ok: 1.0, operationTime: Timestamp(1547393718, 500), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 697), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.611+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "visitor_api.frequencies", lastmodEpoch: ObjectId('5c38442797b3fe009c0722df'), lastmod: new Date(4294967310), dropped: false, key: { r: 1.0, u: 1.0 }, unique: false, uuid: UUID("748197f9-223c-4528-97d9-e5e9e0a542b5") }, { _id: "visitor_api.sessions4", lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), lastmod: new Date(4294967310), dropped: false, key: { r: 1.0, u: 1.0 }, unique: false, uuid: UUID("9a42c447-0b18-4874-934f-d62940851043") } ], id: 0, ns: "config.collections" }, ok: 1.0, operationTime: Timestamp(1547393718, 500), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: 
Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393710, 109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 697), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.611+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.611+0000 D SH_REFR [conn34] Refreshing chunks for collection visitor_api.sessions4 based on version 0|0||000000000000000000000000 Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Executing a task on behalf of pool ConfigServerCatalogCacheLoader Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D TRACKING [ConfigServerCatalogCacheLoader-0] Cmd: NotSet, TrackingId: 5c3b5ab6a1824195fadc0fb6 Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Scheduling remote command request: RemoteCommand 29 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.612+0000 cmd:{ find: "collections", filter: { _id: "visitor_api.sessions4" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393718, 500), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D ASIO [ConfigServerCatalogCacheLoader-0] startCommand: RemoteCommand 29 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.612+0000 cmd:{ find: "collections", filter: { _id: "visitor_api.sessions4" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393718, 500), t: 1 } }, limit: 1, maxTimeMS: 30000 
} Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.612+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.650+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.650+0000 D ASIO [ShardRegistry] Request 29 finished with response: { cursor: { firstBatch: [ { _id: "visitor_api.sessions4", lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), lastmod: new Date(4294967310), dropped: false, key: { r: 1.0, u: 1.0 }, unique: false, uuid: UUID("9a42c447-0b18-4874-934f-d62940851043") } ], id: 0, ns: "config.collections" }, ok: 1.0, operationTime: Timestamp(1547393718, 697), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 697), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.650+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "visitor_api.sessions4", lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), lastmod: new Date(4294967310), dropped: false, 
key: { r: 1.0, u: 1.0 }, unique: false, uuid: UUID("9a42c447-0b18-4874-934f-d62940851043") } ], id: 0, ns: "config.collections" }, ok: 1.0, operationTime: Timestamp(1547393718, 697), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 697), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.650+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.651+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Scheduling remote command request: RemoteCommand 30 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.651+0000 cmd:{ find: "chunks", filter: { ns: "visitor_api.sessions4", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393718, 500), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.651+0000 D ASIO [ConfigServerCatalogCacheLoader-0] startCommand: RemoteCommand 30 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.651+0000 cmd:{ find: "chunks", filter: { ns: "visitor_api.sessions4", lastmod: { $gte: Timestamp(0, 0) } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393718, 500), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.651+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 
2019-01-13T15:35:18.651+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.651+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.651+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.690+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.692+0000 D ASIO [ShardRegistry] warning: log line attempted (43kB) over max size (10kB), printing beginning and end ... Request 30 finished with response: { cursor: { firstBatch: [ { _id: "visitor_api.sessions4-r_MinKeyu_MinKey", ns: "visitor_api.sessions4", min: { r: MinKey, u: MinKey }, max: { r: "gce-europe-west1", u: MinKey }, shard: "sessions_gce_europe_west1", lastmod: Timestamp(1, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-europe-west1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-europe-west1", u: MaxKey }, max: { r: "gce-europe-west2", u: MinKey }, shard: "sessions_gce_europe_west2", lastmod: Timestamp(1, 2), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west2" } ] }, { _id: "visitor_api.sessions4-r_"gce-europe-west2"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-europe-west2", u: MaxKey }, max: { r: "gce-europe-west3", u: MinKey }, shard: "sessions_gce_europe_west3", lastmod: Timestamp(1, 4), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west3" } ] }, { _id: "visitor_api.sessions4-r_"gce-europe-west3"u_MaxKey", ns: "visitor_api.sessions4", 
min: { r: "gce-europe-west3", u: MaxKey }, max: { r: "gce-us-central1", u: MinKey }, shard: "sessions_gce_us_central1", lastmod: Timestamp(1, 6), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_central1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: MaxKey }, max: { r: "gce-us-west1", u: MinKey }, shard: "sessions_gce_us_west1", lastmod: Timestamp(1, 10), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: MaxKey }, max: { r: "staging-gce-us-east1", u: MinKey }, shard: "sessions_gce_europe_west1", lastmod: Timestamp(1, 12), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west1" } ] }, { _id: "visitor_api.sessions4-r_"staging-gce-us-east1"u_MinKey", ns: "visitor_api.sessions4", min: { r: "staging-gce-us-east1", u: MinKey }, max: { r: "staging-gce-us-east1", u: MaxKey }, shard: "sessions_gce_us_east1", lastmod: Timestamp(1, 13), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_east1" } ] }, { _id: "visitor_api.sessions4-r_"staging-gce-us-east1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "staging-gce-us-east1", u: MaxKey }, max: { r: MaxKey, u: MaxKey }, shard: "sessions_gce_europe_west2", lastmod: Timestamp(1, 14), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"B1r3g2DLb/BiTZ1bLp3D2Q=="", lastmod: Timestamp(1, 383), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: 
"visitor_api.sessions4", min: { r: "gce-us-west1", u: "B1r3g2DLb/BiTZ1bLp3D2Q==" }, max: { r: "gce-us-west1", u: "B4cTQgWdXzsqQF/6u4XzFA==" }, shard: "sessions_gce_us_west1", .......... u: "TbAaGRfnJ6xOU8bVdqv8lw==" }, max: { r: "gce-us-west1", u: "TcEDngJ65YOc1ptZNwPZ+A==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"X+01XCI+KFc/GHJDYLo4yA=="", lastmod: Timestamp(1, 1627), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "X+01XCI+KFc/GHJDYLo4yA==" }, max: { r: "gce-us-west1", u: "X/+eokepUI2bxji02Tu0IA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"BXH0KkA1V+ki7BmRM32cQw=="", lastmod: Timestamp(1, 1630), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "BXH0KkA1V+ki7BmRM32cQw==" }, max: { r: "gce-us-west1", u: "BYWILhxzMPSG5DW/hLWcKQ==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"T7bgS0wPgFECFvhYe4BIcQ=="", lastmod: Timestamp(1, 1633), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "T7bgS0wPgFECFvhYe4BIcQ==" }, max: { r: "gce-us-west1", u: "T8eJ2SGLxFcEcCTwqcDmWQ==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"ciPQDvK4UojtdRx3iKnNKw=="", lastmod: Timestamp(1, 1636), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "ciPQDvK4UojtdRx3iKnNKw==" }, max: { r: "gce-us-west1", u: 
"cjaae7aoyd2QVVHs+vBMjg==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"Zt5kQ8S38qC7EIKpSTd1/A=="", lastmod: Timestamp(1, 1639), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "Zt5kQ8S38qC7EIKpSTd1/A==" }, max: { r: "gce-us-west1", u: "Zu41idSnyclxHJJmEI+e6w==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-central1"u_"PRgJnECHVZpaTpxat2Q0gA=="", lastmod: Timestamp(1, 1642), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-central1", u: "PRgJnECH Jan 13 15:35:18 ivy mongos[27723]: VZpaTpxat2Q0gA==" }, max: { r: "gce-us-central1", u: "PTJjZsIwZA+aZQuSKaZl1w==" }, shard: "sessions_gce_us_central1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_central1" } ] } ], id: 22400095454, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393718, 697), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 697), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.692+0000 D EXECUTOR [ShardRegistry] warning: log line attempted (43kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "visitor_api.sessions4-r_MinKeyu_MinKey", ns: "visitor_api.sessions4", min: { r: MinKey, u: MinKey }, max: { r: "gce-europe-west1", u: MinKey }, shard: "sessions_gce_europe_west1", lastmod: Timestamp(1, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-europe-west1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-europe-west1", u: MaxKey }, max: { r: "gce-europe-west2", u: MinKey }, shard: "sessions_gce_europe_west2", lastmod: Timestamp(1, 2), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west2" } ] }, { _id: "visitor_api.sessions4-r_"gce-europe-west2"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-europe-west2", u: MaxKey }, max: { r: "gce-europe-west3", u: MinKey }, shard: "sessions_gce_europe_west3", lastmod: Timestamp(1, 4), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west3" } ] }, { _id: "visitor_api.sessions4-r_"gce-europe-west3"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-europe-west3", u: MaxKey }, max: { r: "gce-us-central1", u: MinKey }, shard: "sessions_gce_us_central1", lastmod: Timestamp(1, 6), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_central1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: MaxKey }, max: { r: "gce-us-west1", u: MinKey }, shard: "sessions_gce_us_west1", lastmod: Timestamp(1, 10), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5) Jan 13 15:35:18 ivy mongos[27723]: , shard: "sessions_gce_us_west1" } ] }, { _id: 
"visitor_api.sessions4-r_"gce-us-west1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: MaxKey }, max: { r: "staging-gce-us-east1", u: MinKey }, shard: "sessions_gce_europe_west1", lastmod: Timestamp(1, 12), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west1" } ] }, { _id: "visitor_api.sessions4-r_"staging-gce-us-east1"u_MinKey", ns: "visitor_api.sessions4", min: { r: "staging-gce-us-east1", u: MinKey }, max: { r: "staging-gce-us-east1", u: MaxKey }, shard: "sessions_gce_us_east1", lastmod: Timestamp(1, 13), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_east1" } ] }, { _id: "visitor_api.sessions4-r_"staging-gce-us-east1"u_MaxKey", ns: "visitor_api.sessions4", min: { r: "staging-gce-us-east1", u: MaxKey }, max: { r: MaxKey, u: MaxKey }, shard: "sessions_gce_europe_west2", lastmod: Timestamp(1, 14), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_europe_west2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"B1r3g2DLb/BiTZ1bLp3D2Q=="", lastmod: Timestamp(1, 383), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "B1r3g2DLb/BiTZ1bLp3D2Q==" }, max: { r: "gce-us-west1", u: "B4cTQgWdXzsqQF/6u4XzFA==" }, shard: "sessions_ .......... 
u: "TbAaGRfnJ6xOU8bVdqv8lw==" }, max: { r: "gce-us-west1", u: "TcEDngJ65YOc1ptZNwPZ+A==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"X+01XCI+KFc/GHJDYLo4yA=="", lastmod: Timestamp(1, 1627), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "X+01XCI+KFc/GHJDYLo4yA==" }, max: { r: "gce-us-west1", u: "X/+eokepUI2bxji02Tu0IA==" }, shard: " Jan 13 15:35:18 ivy mongos[27723]: sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"BXH0KkA1V+ki7BmRM32cQw=="", lastmod: Timestamp(1, 1630), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "BXH0KkA1V+ki7BmRM32cQw==" }, max: { r: "gce-us-west1", u: "BYWILhxzMPSG5DW/hLWcKQ==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"T7bgS0wPgFECFvhYe4BIcQ=="", lastmod: Timestamp(1, 1633), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "T7bgS0wPgFECFvhYe4BIcQ==" }, max: { r: "gce-us-west1", u: "T8eJ2SGLxFcEcCTwqcDmWQ==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"ciPQDvK4UojtdRx3iKnNKw=="", lastmod: Timestamp(1, 1636), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "ciPQDvK4UojtdRx3iKnNKw==" }, max: { r: "gce-us-west1", u: "cjaae7aoyd2QVVHs+vBMjg==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: 
"visitor_api.sessions4-r_"gce-us-west1"u_"Zt5kQ8S38qC7EIKpSTd1/A=="", lastmod: Timestamp(1, 1639), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "Zt5kQ8S38qC7EIKpSTd1/A==" }, max: { r: "gce-us-west1", u: "Zu41idSnyclxHJJmEI+e6w==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-central1"u_"PRgJnECHVZpaTpxat2Q0gA=="", lastmod: Timestamp(1, 1642), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-central1", u: "PRgJnECH Jan 13 15:35:18 ivy mongos[27723]: VZpaTpxat2Q0gA==" }, max: { r: "gce-us-central1", u: "PTJjZsIwZA+aZQuSKaZl1w==" }, shard: "sessions_gce_us_central1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_central1" } ] } ], id: 22400095454, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393718, 697), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 500), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 500), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 500), $clusterTime: { clusterTime: Timestamp(1547393718, 697), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.692+0000 D EXECUTOR [ShardRegistry] Scheduling remote command request: RemoteCommand 31 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:48.692+0000 cmd:{ getMore: 22400095454, collection: "chunks" } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.693+0000 D ASIO [ShardRegistry] startCommand: RemoteCommand 31 -- target:jasper.node.gce-us-west1.admiral:27019 db:config 
expDate:2019-01-13T15:35:48.692+0000 cmd:{ getMore: 22400095454, collection: "chunks" } Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.693+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.693+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.693+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.693+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.693+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:18 ivy mongos[27723]: 2019-01-13T15:35:18.961+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.018+0000 D ASIO [ShardRegistry] warning: log line attempted (5368kB) over max size (10kB), printing beginning and end ... 
Request 31 finished with response: { cursor: { nextBatch: [ { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"+uHL6L14HsDhZLQdtIF+yw=="", lastmod: Timestamp(1, 1645), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "+uHL6L14HsDhZLQdtIF+yw==" }, max: { r: "gce-us-west1", u: "+vEIX2tS2aBJwLBW06MvBA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"WWHiSmqG9W09RWw0fzob5g=="", lastmod: Timestamp(1, 1648), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "WWHiSmqG9W09RWw0fzob5g==" }, max: { r: "gce-us-west1", u: "WXPcdl74CpKVQLhHnqlY/w==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"HjgDRp8ACaM8p9nTDKvkyg=="", lastmod: Timestamp(1, 1651), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "HjgDRp8ACaM8p9nTDKvkyg==" }, max: { r: "gce-us-west1", u: "Hknm3L/sQwec4jc1CZLilA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"58M3FVXTYZvcUQHEpkNmvw=="", lastmod: Timestamp(1, 1654), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "58M3FVXTYZvcUQHEpkNmvw==" }, max: { r: "gce-us-west1", u: "59PzpvDsF5jUtNPG0N6DIw==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"EuBhQkIa6+C7p5Q2ftbT7g=="", lastmod: Timestamp(1, 1657), lastmodEpoch: ObjectId('5c004c2b Jan 13 15:35:19 ivy mongos[27723]: f113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: 
"gce-us-west1", u: "EuBhQkIa6+C7p5Q2ftbT7g==" }, max: { r: "gce-us-west1", u: "EvDLGm5siZi4byflgyOoWw==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"jcJbbw41w5J8vVxrtq9U8A=="", lastmod: Timestamp(1, 1660), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "jcJbbw41w5J8vVxrtq9U8A==" }, max: { r: "gce-us-west1", u: "jdKwj/cR4A27FQ51ZPJa9A==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"CXum88GVvVTKpgycRid0FQ=="", lastmod: Timestamp(1, 1663), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "CXum88GVvVTKpgycRid0FQ==" }, max: { r: "gce-us-west1", u: "CZ13X44351P7jpg3/U/FQA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"k3uf2STm69CtVLn00sRgjQ=="", lastmod: Timestamp(1, 1669), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "k3uf2STm69CtVLn00sRgjQ==" }, max: { r: "gce-us-west1", u: "k4sS9HCnKv43CzQevscGJw==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp .......... 
_api.sessions4", min: { r: "gce-us-east1", u: "UrWGskpdRgYmcZbevwHtNQ==" }, max: { r: "gce-us-east1", u: "Uswg5veRo2MiRh3ecM29Gw==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546991981, 1285), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"Uswg5veRo2MiRh3ecM29Gw=="", lastmod: Timestamp(2072, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "Uswg5veRo2MiRh3ecM29Gw==" }, max: { r: "gce-u Jan 13 15:35:19 ivy mongos[27723]: s-east1", u: "UtKkiETYYmyfPY+qcyq53w==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992107, 578), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"UtKkiETYYmyfPY+qcyq53w=="", lastmod: Timestamp(2073, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "UtKkiETYYmyfPY+qcyq53w==" }, max: { r: "gce-us-east1", u: "UulyIA3t17zXdbwla8UcyA==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992387, 355), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"UulyIA3t17zXdbwla8UcyA=="", lastmod: Timestamp(2074, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "UulyIA3t17zXdbwla8UcyA==" }, max: { r: "gce-us-east1", u: "UwAbyt9D6Zo1qlfk0XptAQ==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992652, 172), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"UwAbyt9D6Zo1qlfk0XptAQ=="", lastmod: Timestamp(2075, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "UwAbyt9D6Zo1qlfk0XptAQ==" }, max: { r: "gce-us-east1", u: "Uwa+Dw0ZrJLFb7mVj/GykQ==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992771, 293), shard: 
"sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"Uwa+Dw0ZrJLFb7mVj/GykQ=="", lastmod: Timestamp(2076, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "Uwa+Dw0ZrJLFb7mVj/GykQ==" }, max: { r: "gce-us-east1", u: "UxOVavlFZXtKRL5MnB+1uQ==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992957, 229), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-central1"u_MaxKey", lastmod: Timestamp(2076, 1), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visi Jan 13 15:35:19 ivy mongos[27723]: tor_api.sessions4", min: { r: "gce-us-central1", u: MaxKey }, max: { r: "gce-us-east1", u: MinKey }, shard: "sessions_gce_us_east1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_east1" } ] } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393718, 697), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 550), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 550), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 550), $clusterTime: { clusterTime: Timestamp(1547393718, 735), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.062+0000 D EXECUTOR [ShardRegistry] warning: log line attempted (5368kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [ { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"+uHL6L14HsDhZLQdtIF+yw=="", lastmod: Timestamp(1, 1645), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "+uHL6L14HsDhZLQdtIF+yw==" }, max: { r: "gce-us-west1", u: "+vEIX2tS2aBJwLBW06MvBA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"WWHiSmqG9W09RWw0fzob5g=="", lastmod: Timestamp(1, 1648), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "WWHiSmqG9W09RWw0fzob5g==" }, max: { r: "gce-us-west1", u: "WXPcdl74CpKVQLhHnqlY/w==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"HjgDRp8ACaM8p9nTDKvkyg=="", lastmod: Timestamp(1, 1651), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "HjgDRp8ACaM8p9nTDKvkyg==" }, max: { r: "gce-us-west1", u: "Hknm3L/sQwec4jc1CZLilA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"58M3FVXTYZvcUQHEpkNmvw=="", lastmod: Timestamp(1, 1654), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "58M3FVXTYZvcUQHEpkNmvw==" }, max: { r: "gce-us-west1", u: "59PzpvDsF5jUtNPG0N6DIw==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"EuBhQkIa6+C7p5Q2ftbT7g=="", lastmod: Timestamp(1, 1657), lastmodEpoch: Obje Jan 13 15:35:19 ivy mongos[27723]: ctId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", 
min: { r: "gce-us-west1", u: "EuBhQkIa6+C7p5Q2ftbT7g==" }, max: { r: "gce-us-west1", u: "EvDLGm5siZi4byflgyOoWw==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"jcJbbw41w5J8vVxrtq9U8A=="", lastmod: Timestamp(1, 1660), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "jcJbbw41w5J8vVxrtq9U8A==" }, max: { r: "gce-us-west1", u: "jdKwj/cR4A27FQ51ZPJa9A==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"CXum88GVvVTKpgycRid0FQ=="", lastmod: Timestamp(1, 1663), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "CXum88GVvVTKpgycRid0FQ==" }, max: { r: "gce-us-west1", u: "CZ13X44351P7jpg3/U/FQA==" }, shard: "sessions_gce_us_west1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_west1" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-west1"u_"k3uf2STm69CtVLn00sRgjQ=="", lastmod: Timestamp(1, 1669), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-west1", u: "k3uf2STm69CtVLn00sRgjQ==" }, max: { r: "gce-us-west1", u: "k4sS9HCnKv43CzQevscGJw==" }, shard: "sessions_gce_us_west1", history: [ { validAf .......... 
_api.sessions4", min: { r: "gce-us-east1", u: "UrWGskpdRgYmcZbevwHtNQ==" }, max: { r: "gce-us-east1", u: "Uswg5veRo2MiRh3ecM29Gw==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546991981, 1285), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"Uswg5veRo2MiRh3ecM29Gw=="", lastmod: Timestamp(2072, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "Uswg5veRo2MiRh3ecM29Gw==" }, max: { r: "gce-u Jan 13 15:35:19 ivy mongos[27723]: s-east1", u: "UtKkiETYYmyfPY+qcyq53w==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992107, 578), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"UtKkiETYYmyfPY+qcyq53w=="", lastmod: Timestamp(2073, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "UtKkiETYYmyfPY+qcyq53w==" }, max: { r: "gce-us-east1", u: "UulyIA3t17zXdbwla8UcyA==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992387, 355), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"UulyIA3t17zXdbwla8UcyA=="", lastmod: Timestamp(2074, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "UulyIA3t17zXdbwla8UcyA==" }, max: { r: "gce-us-east1", u: "UwAbyt9D6Zo1qlfk0XptAQ==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992652, 172), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"UwAbyt9D6Zo1qlfk0XptAQ=="", lastmod: Timestamp(2075, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "UwAbyt9D6Zo1qlfk0XptAQ==" }, max: { r: "gce-us-east1", u: "Uwa+Dw0ZrJLFb7mVj/GykQ==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992771, 293), shard: 
"sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-east1"u_"Uwa+Dw0ZrJLFb7mVj/GykQ=="", lastmod: Timestamp(2076, 0), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visitor_api.sessions4", min: { r: "gce-us-east1", u: "Uwa+Dw0ZrJLFb7mVj/GykQ==" }, max: { r: "gce-us-east1", u: "UxOVavlFZXtKRL5MnB+1uQ==" }, shard: "sessions_gce_us_east1_2", history: [ { validAfter: Timestamp(1546992957, 229), shard: "sessions_gce_us_east1_2" } ] }, { _id: "visitor_api.sessions4-r_"gce-us-central1"u_MaxKey", lastmod: Timestamp(2076, 1), lastmodEpoch: ObjectId('5c004c2bf113b95c328ec37a'), ns: "visi Jan 13 15:35:19 ivy mongos[27723]: tor_api.sessions4", min: { r: "gce-us-central1", u: MaxKey }, max: { r: "gce-us-east1", u: MinKey }, shard: "sessions_gce_us_east1", history: [ { validAfter: Timestamp(1543523371, 5), shard: "sessions_gce_us_east1" } ] } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393718, 697), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393718, 550), t: 1 }, lastOpVisible: { ts: Timestamp(1547393718, 550), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393718, 550), $clusterTime: { clusterTime: Timestamp(1547393718, 735), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.075+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.140+0000 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection visitor_api.sessions4 to version 2076|1||5c004c2bf113b95c328ec37a took 528 ms Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.141+0000 D QUERY [conn34] Beginning planning... 
Jan 13 15:35:19 ivy mongos[27723]: ============================= Jan 13 15:35:19 ivy mongos[27723]: Options = NO_TABLE_SCAN Jan 13 15:35:19 ivy mongos[27723]: Canonical query: Jan 13 15:35:19 ivy mongos[27723]: ns=visitor_api.sessions4Tree: $and Jan 13 15:35:19 ivy mongos[27723]: r $eq "gce-us-east1" Jan 13 15:35:19 ivy mongos[27723]: u $lt "V" Jan 13 15:35:19 ivy mongos[27723]: Sort: {} Jan 13 15:35:19 ivy mongos[27723]: Proj: {} Jan 13 15:35:19 ivy mongos[27723]: ============================= Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.147+0000 D QUERY [conn34] Index 0 is kp: { r: 1.0, u: 1.0 } name: 'shardkey' Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.148+0000 D QUERY [conn34] Predicate over field 'u' Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.148+0000 D QUERY [conn34] Predicate over field 'r' Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.148+0000 D QUERY [conn34] Relevant index 0 is kp: { r: 1.0, u: 1.0 } name: 'shardkey' Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.149+0000 D QUERY [conn34] Rated tree: Jan 13 15:35:19 ivy mongos[27723]: $and Jan 13 15:35:19 ivy mongos[27723]: r $eq "gce-us-east1" || First: 0 notFirst: full path: r Jan 13 15:35:19 ivy mongos[27723]: u $lt "V" || First: notFirst: 0 full path: u Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.149+0000 D QUERY [conn34] Tagging memoID 1 Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.149+0000 D QUERY [conn34] Enumerator: memo just before moving: Jan 13 15:35:19 ivy mongos[27723]: [Node #1]: AND enumstate counter 0 Jan 13 15:35:19 ivy mongos[27723]: choice 0: Jan 13 15:35:19 ivy mongos[27723]: subnodes: Jan 13 15:35:19 ivy mongos[27723]: idx[0] Jan 13 15:35:19 ivy mongos[27723]: pos 0 pred r $eq "gce-us-east1" Jan 13 15:35:19 ivy mongos[27723]: pos 1 pred u $lt "V" Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.149+0000 D QUERY [conn34] About to build solntree from tagged tree: Jan 13 15:35:19 ivy mongos[27723]: $and Jan 13 
15:35:19 ivy mongos[27723]: r $eq "gce-us-east1" || Selected Index #0 pos 0 combine 1 Jan 13 15:35:19 ivy mongos[27723]: u $lt "V" || Selected Index #0 pos 1 combine 1 Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.149+0000 D QUERY [conn34] Planner: adding solution: Jan 13 15:35:19 ivy mongos[27723]: FETCH Jan 13 15:35:19 ivy mongos[27723]: ---fetched = 1 Jan 13 15:35:19 ivy mongos[27723]: ---sortedByDiskLoc = 0 Jan 13 15:35:19 ivy mongos[27723]: ---getSort = [{ r: 1 }, { r: 1, u: 1 }, { u: 1 }, ] Jan 13 15:35:19 ivy mongos[27723]: ---Child: Jan 13 15:35:19 ivy mongos[27723]: ------IXSCAN Jan 13 15:35:19 ivy mongos[27723]: ---------indexName = shardkey Jan 13 15:35:19 ivy mongos[27723]: keyPattern = { r: 1.0, u: 1.0 } Jan 13 15:35:19 ivy mongos[27723]: ---------direction = 1 Jan 13 15:35:19 ivy mongos[27723]: ---------bounds = field #0['r']: ["gce-us-east1", "gce-us-east1"], field #1['u']: ["", "V") Jan 13 15:35:19 ivy mongos[27723]: ---------fetched = 0 Jan 13 15:35:19 ivy mongos[27723]: ---------sortedByDiskLoc = 0 Jan 13 15:35:19 ivy mongos[27723]: ---------getSort = [{ r: 1 }, { r: 1, u: 1 }, { u: 1 }, ] Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.149+0000 D QUERY [conn34] Planner: outputted 1 indexed solutions. 
Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.152+0000 D EXECUTOR [conn34] Scheduling remote command request: RemoteCommand 32 -- target:phil.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ explain: { find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "V" } }, limit: 2.0, singleBatch: false, lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e") } }, verbosity: "queryPlanner", allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(2076, 1), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.152+0000 D ASIO [conn34] startCommand: RemoteCommand 32 -- target:phil.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ explain: { find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "V" } }, limit: 2.0, singleBatch: false, lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e") } }, verbosity: "queryPlanner", allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(2076, 1), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.153+0000 I ASIO [TaskExecutorPool-0] Connecting to phil.node.gce-us-east1.admiral:27017 Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.153+0000 D ASIO [TaskExecutorPool-0] Finished connection setup. 
Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.154+0000 D EXECUTOR [conn34] Scheduling remote command request: RemoteCommand 33 -- target:queen.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ explain: { find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "V" } }, limit: 2.0, singleBatch: false, lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e") } }, verbosity: "queryPlanner", allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(2076, 0), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.154+0000 D ASIO [conn34] startCommand: RemoteCommand 33 -- target:queen.node.gce-us-east1.admiral:27017 db:visitor_api cmd:{ explain: { find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "V" } }, limit: 2.0, singleBatch: false, lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e") } }, verbosity: "queryPlanner", allowImplicitCollectionCreation: false, shardVersion: [ Timestamp(2076, 0), ObjectId('5c004c2bf113b95c328ec37a') ], lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.154+0000 I ASIO [TaskExecutorPool-0] Connecting to queen.node.gce-us-east1.admiral:27017 Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.154+0000 D ASIO [TaskExecutorPool-0] Finished connection setup. 
Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.154+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Not reaping because the earliest retirement date is 2019-01-13T15:35:48.612+0000 Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.243+0000 D NETWORK [TaskExecutorPool-0] Starting client-side compression negotiation Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.243+0000 D NETWORK [TaskExecutorPool-0] Offering snappy compressor to server Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.243+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [TaskExecutorPool-0] Finishing client-side compression negotiation Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [TaskExecutorPool-0] Received message compressors from server Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [TaskExecutorPool-0] Adding compressor snappy Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.280+0000 D NETWORK [conn34] Compressing message with snappy Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.293+0000 D NETWORK [TaskExecutorPool-0] Starting client-side compression negotiation Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.293+0000 D NETWORK [TaskExecutorPool-0] Offering snappy compressor to server Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.293+0000 D NETWORK [TaskExecutorPool-0] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.329+0000 D NETWORK [TaskExecutorPool-0] Finishing client-side compression negotiation Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.330+0000 D NETWORK [TaskExecutorPool-0] Received message compressors from server Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.330+0000 D NETWORK [TaskExecutorPool-0] Adding compressor snappy Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.330+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.330+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.330+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.330+0000 D NETWORK [conn34] Compressing message with snappy Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.383+0000 D NETWORK [conn34] Decompressing message with snappy Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.383+0000 D ASIO [conn34] Request 33 finished with response: { queryPlanner: { plannerVersion: 1, namespace: "visitor_api.sessions4", indexFilterSet: false, parsedQuery: { $and: [ { r: { $eq: "gce-us-east1" } }, { u: { $lt: "V" } } ] }, winningPlan: { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0, u: 1.0 }, indexName: "r_1_ss_1_tsc_1_tslp_1_u_1", isMultiKey: false, multiKeyPaths: { r: [], ss: [], tsc: [], tslp: [], u: [] }, isUnique: false, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ], u: 
[ "["", "V")" ] } } } } }, rejectedPlans: [ { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "FETCH", filter: { u: { $lt: "V" } }, inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, e: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0 }, indexName: "r_1_e_1_ss_1_tsc_1_tslp_1", isMultiKey: false, multiKeyPaths: { r: [], e: [], ss: [], tsc: [], tslp: [] }, isUnique: false, isSparse: true, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], e: [ "[MinKey, MaxKey]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ] } } } } }, { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1, u: 1, pid: 1, oid: 1, incr: 1 }, indexName: "r_1_u_1_pid_1_oid_1_incr_1", isMultiKey: false, multiKeyPaths: { r: [], u: [], pid: [], oid: [], incr: [] }, isUnique: true, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], u: [ "["", "V")" ], pid: [ "[MinKey, MaxKey]" ], oid: [ "[MinKey, MaxKey]" ], incr: [ "[MinKey, MaxKey Jan 13 15:35:19 ivy mongos[27723]: ]" ] } } } } } ] }, serverInfo: { host: "queen", port: 27017, version: "4.0.5", gitVersion: "3739429dd92b92d1b0ab120911a23d50bf03c412" }, ok: 1.0, operationTime: Timestamp(1547393719, 484), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393719, 398), $configServerState: { opTime: { ts: Timestamp(1547393718, 1183), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393719, 484), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:19 ivy mongos[27723]: 2019-01-13T15:35:19.383+0000 D EXECUTOR [conn34] Received remote response: RemoteResponse -- cmd:{ queryPlanner: { plannerVersion: 1, namespace: 
"visitor_api.sessions4", indexFilterSet: false, parsedQuery: { $and: [ { r: { $eq: "gce-us-east1" } }, { u: { $lt: "V" } } ] }, winningPlan: { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0, u: 1.0 }, indexName: "r_1_ss_1_tsc_1_tslp_1_u_1", isMultiKey: false, multiKeyPaths: { r: [], ss: [], tsc: [], tslp: [], u: [] }, isUnique: false, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ], u: [ "["", "V")" ] } } } } }, rejectedPlans: [ { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "FETCH", filter: { u: { $lt: "V" } }, inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, e: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0 }, indexName: "r_1_e_1_ss_1_tsc_1_tslp_1", isMultiKey: false, multiKeyPaths: { r: [], e: [], ss: [], tsc: [], tslp: [] }, isUnique: false, isSparse: true, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], e: [ "[MinKey, MaxKey]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ] } } } } }, { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1, u: 1, pid: 1, oid: 1, incr: 1 }, indexName: "r_1_u_1_pid_1_oid_1_incr_1", isMultiKey: false, multiKeyPaths: { r: [], u: [], pid: [], oid: [], incr: [] }, isUnique: true, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], u: [ "["", "V")" ], pid: [ "[MinKey, MaxKey]" ], oid: [ "[MinKey, MaxKey]" ], incr: [ "[ Jan 13 15:35:19 ivy mongos[27723]: MinKey, MaxKey]" ] } } } } } ] }, serverInfo: { host: 
"queen", port: 27017, version: "4.0.5", gitVersion: "3739429dd92b92d1b0ab120911a23d50bf03c412" }, ok: 1.0, operationTime: Timestamp(1547393719, 484), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393719, 398), $configServerState: { opTime: { ts: Timestamp(1547393718, 1183), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393719, 484), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5ab8a1824195fadc0fb9 Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 34 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:35:50.604+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393720604), up: 10, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 34 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:35:50.604+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393720604), up: 10, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D NETWORK 
[ShardRegistry] Compressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.604+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D ASIO [ShardRegistry] Request 34 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393720, 752), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 752), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 752), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393720, 752), $clusterTime: { clusterTime: Timestamp(1547393720, 1038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393720, 752), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 752), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 752), t: 1 }, configVersion: 6, replicaSetId: 
ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393720, 752), $clusterTime: { clusterTime: Timestamp(1547393720, 1038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 35 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:50.809+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393720, 752), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 35 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:50.809+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393720, 752), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.809+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D NETWORK 
[ShardRegistry] Decompressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D ASIO [ShardRegistry] Request 35 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393720, 1038), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 752), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 752), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393720, 752), $clusterTime: { clusterTime: Timestamp(1547393720, 1038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393720, 1038), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 752), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 752), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393720, 752), $clusterTime: { clusterTime: Timestamp(1547393720, 1038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D EXECUTOR [Uptime 
reporter] Scheduling remote command request: RemoteCommand 36 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:50.849+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393720, 752), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 36 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:50.849+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393720, 752), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.849+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D ASIO [ShardRegistry] Request 36 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393720, 1038), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 752), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 752), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393720, 752), $clusterTime: { clusterTime: Timestamp(1547393720, 1038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393720, 1038), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 752), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 752), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393720, 752), $clusterTime: { clusterTime: Timestamp(1547393720, 1038), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 37 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:50.888+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393720, 752), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 37 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:35:50.888+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393720, 752), t: 1 } }, limit: 1, 
maxTimeMS: 30000 } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.888+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.928+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.928+0000 D ASIO [ShardRegistry] Request 37 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393720, 1038), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 913), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 913), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393720, 913), $clusterTime: { clusterTime: Timestamp(1547393720, 1326), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.928+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393720, 1038), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393720, 913), t: 1 }, lastOpVisible: { ts: Timestamp(1547393720, 913), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393720, 913), $clusterTime: { clusterTime: Timestamp(1547393720, 1326), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:20 ivy mongos[27723]: 2019-01-13T15:35:20.928+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.223+0000 D SHARDING [conn30] Command begin db: admin msg id: 35 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.223+0000 D SHARDING [conn30] Command end db: admin msg id: 35 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.223+0000 I COMMAND [conn30] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.226+0000 D SHARDING [conn30] Command begin db: admin msg id: 37 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.226+0000 D NETWORK [conn30] Starting server-side compression negotiation Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.226+0000 D NETWORK [conn30] Compression negotiation not requested by client Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.226+0000 D SHARDING [conn30] Command end db: admin msg id: 37 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.226+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.227+0000 I NETWORK [listener] connection accepted from 127.0.0.1:27733 #37 (3 connections now open) Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.227+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.230+0000 D SHARDING 
[conn37] Command begin db: admin msg id: 1 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.230+0000 D SHARDING [conn37] Command end db: admin msg id: 1 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.230+0000 I COMMAND [conn37] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:206 protocol:op_query 0ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.230+0000 D SHARDING [conn37] Command begin db: admin msg id: 3 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.230+0000 D SHARDING [conn37] Command end db: admin msg id: 3 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.230+0000 I COMMAND [conn37] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.331+0000 D SHARDING [conn37] Command begin db: admin msg id: 5 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.332+0000 D SHARDING [conn37] Command end db: admin msg id: 5 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.332+0000 I COMMAND [conn37] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 1ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.333+0000 D SHARDING [conn37] Command begin db: config msg id: 7 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.333+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 38 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.333+0000 D ASIO [conn37] startCommand: RemoteCommand 38 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.334+0000 D NETWORK 
[TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.334+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.334+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.334+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.372+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.372+0000 D ASIO [conn37] Request 38 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 1), $clusterTime: { clusterTime: Timestamp(1547393729, 232), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.372+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 1), $clusterTime: { clusterTime: Timestamp(1547393729, 232), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.372+0000 D SHARDING [conn37] Command end db: config msg id: 7 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.372+0000 I COMMAND [conn37] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 38ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D SHARDING [conn37] Command begin db: config msg id: 9 
Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ac1a1824195fadc0fc4 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 39 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D ASIO [conn37] startCommand: RemoteCommand 39 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.373+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.461+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.461+0000 D ASIO [ShardRegistry] Request 39 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 
1.0, operationTime: Timestamp(1547393729, 153), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 311), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.461+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 311), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.461+0000 D SHARDING [conn37] Command end db: config msg id: 9 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.461+0000 I COMMAND [conn37] query config.chunks command: { 
aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 88ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D SHARDING [conn37] Command begin db: config msg id: 11 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 40 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D ASIO [conn37] startCommand: RemoteCommand 40 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.462+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.498+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.498+0000 D ASIO [conn37] Request 40 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: 
true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 386), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.498+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 386), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.499+0000 D SHARDING [conn37] Command end db: config msg id: 11 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.499+0000 I COMMAND [conn37] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D SHARDING [conn37] Command begin db: config msg id: 13 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ac1a1824195fadc0fc7 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 41 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393129500) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D ASIO [conn37] startCommand: RemoteCommand 41 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393129500) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.501+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.553+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.553+0000 D ASIO [ShardRegistry] Request 41 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 495), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.553+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 495), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.553+0000 D SHARDING [conn37] Command end db: config msg id: 13 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.553+0000 I COMMAND [conn37] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393129500) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 52ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.554+0000 D SHARDING [conn37] Command begin db: config msg id: 15 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.554+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 42 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:35:29 ivy 
mongos[27723]: 2019-01-13T15:35:29.554+0000 D ASIO [conn37] startCommand: RemoteCommand 42 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.554+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.554+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.554+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.554+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.591+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.591+0000 D ASIO [conn37] Request 42 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: 
"sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 495), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.591+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: 
"sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 495), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.591+0000 D SHARDING [conn37] Command end db: config msg id: 15 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.591+0000 I COMMAND [conn37] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D SHARDING [conn37] Command begin db: config msg id: 17 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 43 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ 
count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D ASIO [conn37] startCommand: RemoteCommand 43 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.592+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.628+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.628+0000 D ASIO [conn37] Request 43 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 582), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.628+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393729, 153), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 582), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 
2019-01-13T15:35:29.628+0000 D SHARDING [conn37] Command end db: config msg id: 17 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 I COMMAND [conn37] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D SHARDING [conn37] Command begin db: config msg id: 19 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ac1a1824195fadc0fcb Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 44 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D ASIO [conn37] startCommand: RemoteCommand 44 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.629+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.704+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.704+0000 D ASIO [ShardRegistry] Request 44 finished with response: { cursor: { 
firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 641), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.704+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393729, 153), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 641), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.704+0000 D SHARDING [conn37] Command end db: config msg id: 19 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.704+0000 I COMMAND [conn37] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 75ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D SHARDING [conn37] Command begin db: config msg id: 21 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ac1a1824195fadc0fcd Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 45 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D ASIO [conn37] startCommand: RemoteCommand 45 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D NETWORK [ShardRegistry] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.706+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.743+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.743+0000 D ASIO [ShardRegistry] Request 45 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393729, 655), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 659), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.743+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393729, 655), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393729, 153), t: 1 }, lastOpVisible: { ts: Timestamp(1547393729, 153), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393720, 752), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 659), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 
2019-01-13T15:35:29.743+0000 D SHARDING [conn37] Command end db: config msg id: 21 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.743+0000 I COMMAND [conn37] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D SHARDING [conn37] Command begin db: config msg id: 23 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 46 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D ASIO [conn37] startCommand: RemoteCommand 46 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.744+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.780+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.780+0000 D ASIO [conn37] Request 46 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393729, 655), $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 670), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.780+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393729, 655), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 670), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.780+0000 D SHARDING [conn37] Command end db: config msg id: 23 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.780+0000 I COMMAND [conn37] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D SHARDING [conn37] Command begin db: config msg id: 25 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 47 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393129781) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D ASIO [conn37] startCommand: RemoteCommand 47 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393129781) } }, comment: 
"/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.781+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.818+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.818+0000 D ASIO [conn37] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Request 47 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393722460), up: 3486919, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393720862), up: 3433057, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393719789), up: 3486817, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393725853), up: 664, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393723011), up: 74669, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393728649), up: 74700, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393728721), up: 74674, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393726134), up: 74644, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393722597), up: 74640, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393723486), up: 74613, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new 
Date(1547393720970), up: 74583, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393725581), up: 74615, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393719607), up: 74582, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393724173), up: 74560, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393726984), up: 74563, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393725784), up: 74507, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393729750), up: 74540, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393723280), up: 74534, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393724767), up: 74506, waiting: true }, { _id: "jacob:270 .......... 
75114, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393727348), up: 75078, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393725750), up: 75113, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393726464), up: 75872, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393722164), up: 75927, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393721748), up: 75928, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393720847), up: 75866, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393723880), up: 76459, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393723092), up: 76458, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719827), up: 76395, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719793), up: 76245, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719824), up: 76395, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719796), up: 76183, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393723848), up: 76249, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719996), up: 76183, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719998), up: 76057, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393721629), up: 76122, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393721729), up: 76123, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393720604), up: 10, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719796), up: 75996, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393728953), up: 76066, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393729, 721), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 721), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.819+0000 D EXECUTOR [conn37] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393722460), up: 3486919, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393720862), up: 3433057, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393719789), up: 3486817, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393725853), up: 664, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393723011), up: 74669, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393728649), up: 74700, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393728721), up: 74674, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393726134), up: 74644, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393722597), up: 74640, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393723486), up: 74613, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393720970), up: 74583, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393725581), up: 74615, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393719607), up: 74582, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393724173), up: 74560, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393726984), up: 74563, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393725784), up: 74507, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393729750), up: 74540, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393723280), up: 74534, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393724767), up: 74506, waiting: true }, { _ .......... 
75114, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393727348), up: 75078, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393725750), up: 75113, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393726464), up: 75872, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393722164), up: 75927, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393721748), up: 75928, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393720847), up: 75866, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393723880), up: 76459, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393723092), up: 76458, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719827), up: 76395, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719793), up: 76245, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719824), up: 76395, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719796), up: 76183, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393723848), up: 76249, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719996), up: 76183, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719998), up: 76057, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393721629), up: 76122, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393721729), up: 76123, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393720604), up: 10, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393719796), up: 75996, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393728953), up: 76066, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393729, 721), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 153), $clusterTime: { clusterTime: Timestamp(1547393729, 721), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.819+0000 D SHARDING [conn37] Command end db: config msg id: 25 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.819+0000 I COMMAND [conn37] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393129781) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 38ms Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D SHARDING [conn37] Command begin db: config msg id: 27 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 48 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D ASIO [conn37] startCommand: RemoteCommand 48 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.820+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.860+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.860+0000 D ASIO [conn37] Request 48 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393729, 721), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393729, 655), $clusterTime: { clusterTime: Timestamp(1547393729, 759), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.860+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393729, 721), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393729, 655), $clusterTime: { clusterTime: Timestamp(1547393729, 759), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.861+0000 D SHARDING [conn37] Command end db: config msg id: 27 Jan 13 15:35:29 ivy mongos[27723]: 2019-01-13T15:35:29.861+0000 I COMMAND [conn37] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 40ms Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.928+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5ac2a1824195fadc0fd2 Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.928+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 49 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:00.928+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393730928), up: 20, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, 
allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.928+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 49 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:00.928+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393730928), up: 20, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.929+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.929+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.929+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:30 ivy mongos[27723]: 2019-01-13T15:35:30.929+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.175+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.175+0000 D ASIO [ShardRegistry] Request 49 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393730, 1033), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393730, 1033), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393730, 1033), t: 1 }, lastOpVisible: { ts: Timestamp(1547393730, 1033), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393730, 1033), t: 1 }, 
electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393730, 1033), $clusterTime: { clusterTime: Timestamp(1547393731, 136), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.175+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393730, 1033), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393730, 1033), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393730, 1033), t: 1 }, lastOpVisible: { ts: Timestamp(1547393730, 1033), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393730, 1033), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393730, 1033), $clusterTime: { clusterTime: Timestamp(1547393731, 136), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.175+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.176+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 50 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:01.176+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393730, 1033), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.176+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 50 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:01.176+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1547393730, 1033), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.176+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.176+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.176+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.176+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.276+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D ASIO [ShardRegistry] Request 50 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393731, 32), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393731, 32), t: 1 }, lastOpVisible: { ts: Timestamp(1547393731, 32), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393731, 32), $clusterTime: { clusterTime: Timestamp(1547393731, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393731, 32), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1547393731, 32), t: 1 }, lastOpVisible: { ts: Timestamp(1547393731, 32), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393731, 32), $clusterTime: { clusterTime: Timestamp(1547393731, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 51 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:01.277+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393731, 32), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 51 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:01.277+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393731, 32), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.277+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: 
Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.315+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D ASIO [ShardRegistry] Request 51 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393731, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393731, 32), t: 1 }, lastOpVisible: { ts: Timestamp(1547393731, 32), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393731, 32), $clusterTime: { clusterTime: Timestamp(1547393731, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393731, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393731, 32), t: 1 }, lastOpVisible: { ts: Timestamp(1547393731, 32), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393731, 32), $clusterTime: { clusterTime: Timestamp(1547393731, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D EXECUTOR 
[Uptime reporter] Scheduling remote command request: RemoteCommand 52 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:01.316+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393731, 32), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 52 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:01.316+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393731, 32), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.316+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.354+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.354+0000 D ASIO [ShardRegistry] Request 52 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393731, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393731, 32), t: 1 }, lastOpVisible: { ts: Timestamp(1547393731, 32), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1547393731, 32), $clusterTime: { clusterTime: Timestamp(1547393731, 277), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.354+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393731, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393731, 32), t: 1 }, lastOpVisible: { ts: Timestamp(1547393731, 32), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393731, 32), $clusterTime: { clusterTime: Timestamp(1547393731, 277), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:31 ivy mongos[27723]: 2019-01-13T15:35:31.354+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D SHARDING [shard registry reload] Reloading shardRegistry Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D TRACKING [shard registry reload] Cmd: NotSet, TrackingId: 5c3b5ac9a1824195fadc0fd7 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D EXECUTOR [shard registry reload] Scheduling remote command request: RemoteCommand 53 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:07.684+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393731, 32), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D ASIO [shard registry reload] startCommand: RemoteCommand 53 -- target:jasper.node.gce-us-west1.admiral:27019 db:config 
expDate:2019-01-13T15:36:07.684+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393731, 32), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.684+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D ASIO [ShardRegistry] Request 53 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: 
"sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393737, 535), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393737, 453), t: 1 }, lastOpVisible: { ts: Timestamp(1547393737, 453), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393737, 453), $clusterTime: { clusterTime: Timestamp(1547393737, 535), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: 
"sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393737, 535), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393737, 453), t: 1 }, lastOpVisible: { ts: Timestamp(1547393737, 453), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393737, 453), $clusterTime: { clusterTime: Timestamp(1547393737, 535), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] found 7 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1547393737, 453), t: 1 } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for 
sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1, with CS sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_central1, with CS sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 
Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D SHARDING [shard registry reload] Adding shard config, with CS sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.724+0000 D NETWORK [ShardRegistry] Timer 
received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b5ac9a1824195fadc0fd9 Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 54 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:07.789+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393737789) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 54 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:07.789+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393737789) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.975+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.976+0000 D ASIO [ShardRegistry] Request 54 finished with response: { lastErrorObject: { n: 1, updatedExisting: 
true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393707057) }, ok: 1.0, operationTime: Timestamp(1547393737, 680), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393737, 680), t: 1 }, lastOpVisible: { ts: Timestamp(1547393737, 680), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393737, 680), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393737, 680), $clusterTime: { clusterTime: Timestamp(1547393737, 898), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.976+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393707057) }, ok: 1.0, operationTime: Timestamp(1547393737, 680), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393737, 680), t: 1 }, lastOpVisible: { ts: Timestamp(1547393737, 680), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393737, 680), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393737, 680), $clusterTime: { clusterTime: Timestamp(1547393737, 898), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:37 ivy mongos[27723]: 2019-01-13T15:35:37.976+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.284+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.284+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.320+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.320+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547393739, 184), t: 1 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 107), t: 1 }, majorityWriteDate: new Date(1547393739000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739300), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 184), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393739, 107), $clusterTime: { clusterTime: Timestamp(1547393739, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.320+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.320+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393739, 184), t: 1 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.320+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.360+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.360+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393739, 184), t: 1 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 107), t: 1 }, majorityWriteDate: new Date(1547393739000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739335), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 184), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393739, 107), $clusterTime: { clusterTime: Timestamp(1547393739, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.360+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 
15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.360+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393739, 184), t: 1 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.360+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.397+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393739, 184), t: 1 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 129), t: 1 }, majorityWriteDate: new Date(1547393739000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739374), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 184), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393739, 129), $clusterTime: { clusterTime: Timestamp(1547393739, 287), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.398+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393739, 184), t: 1 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.398+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.504+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.504+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393739, 184), t: 1 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 129), t: 1 }, majorityWriteDate: new Date(1547393739000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739446), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 184), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393739, 129), $clusterTime: { clusterTime: Timestamp(1547393739, 287), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.504+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.504+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393739, 184), t: 1 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.504+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.610+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.611+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393739, 184), t: 1 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 184), t: 1 }, majorityWriteDate: new Date(1547393739000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739553), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 184), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, 
lastCommittedOpTime: Timestamp(1547393739, 184), $clusterTime: { clusterTime: Timestamp(1547393739, 416), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.611+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.611+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393739, 184), t: 1 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.611+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393739, 446), t: 1 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 184), t: 1 }, majorityWriteDate: new Date(1547393739000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739627), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: 
Timestamp(1547393739, 446), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393739, 184), $clusterTime: { clusterTime: Timestamp(1547393739, 446), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393739, 446), t: 1 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 365 msec Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.786+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.786+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.824+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.824+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: "phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { 
ts: Timestamp(1547393739, 742), t: 22 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 604), t: 22 }, majorityWriteDate: new Date(1547393739000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739800), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 742), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393739, 604), $configServerState: { opTime: { ts: Timestamp(1547393739, 446), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393739, 742), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.824+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.824+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393739, 742), t: 22 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.824+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.862+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.862+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, 
ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393739, 753), t: 22 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 630), t: 22 }, majorityWriteDate: new Date(1547393739000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739838), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 753), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393739, 630), $configServerState: { opTime: { ts: Timestamp(1547393728, 82), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393739, 769), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.862+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.862+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393739, 753), t: 22 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.862+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.864+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.864+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], 
arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393739, 738), t: 22 }, lastWriteDate: new Date(1547393739000), majorityOpTime: { ts: Timestamp(1547393739, 630), t: 22 }, majorityWriteDate: new Date(1547393739000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393739860), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393739, 738), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393739, 630), $configServerState: { opTime: { ts: Timestamp(1547393733, 915), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393739, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.864+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:35:39.000+0000 Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.864+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393739, 738), t: 22 } Jan 13 15:35:39 ivy mongos[27723]: 2019-01-13T15:35:39.864+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 78 msec Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D TRACKING [UserCacheInvalidator] Cmd: NotSet, TrackingId: 5c3b5acca1824195fadc0fdb Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D EXECUTOR [UserCacheInvalidator] Scheduling remote command request: 
RemoteCommand 55 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:36:10.167+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 55 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:36:10.167+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.167+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.203+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.203+0000 D ASIO [ShardRegistry] Request 55 finished with response: { cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393740, 123), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393739, 909), t: 1 }, lastOpVisible: { ts: Timestamp(1547393739, 909), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393737, 680), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393739, 909), $clusterTime: { clusterTime: Timestamp(1547393740, 123), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.203+0000 D 
EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393740, 123), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393739, 909), t: 1 }, lastOpVisible: { ts: Timestamp(1547393739, 909), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393737, 680), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393739, 909), $clusterTime: { clusterTime: Timestamp(1547393740, 123), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.204+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_central1 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.216+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.217+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.217+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: 
ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393740, 234), t: 4 }, lastWriteDate: new Date(1547393740000), majorityOpTime: { ts: Timestamp(1547393740, 79), t: 4 }, majorityWriteDate: new Date(1547393740000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393740213), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393740, 235), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393740, 79), $configServerState: { opTime: { ts: Timestamp(1547393739, 909), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393740, 236), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:35:40.000+0000 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393740, 234), t: 4 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.218+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.257+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.257+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", 
"flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393740, 154), t: 4 }, lastWriteDate: new Date(1547393740000), majorityOpTime: { ts: Timestamp(1547393740, 79), t: 4 }, majorityWriteDate: new Date(1547393740000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393740232), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393740, 154), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393740, 79), $configServerState: { opTime: { ts: Timestamp(1547393739, 129), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393740, 155), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.257+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:35:40.000+0000 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.257+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393740, 154), t: 4 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.257+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.258+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.258+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ 
"percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393740, 253), t: 4 }, lastWriteDate: new Date(1547393740000), majorityOpTime: { ts: Timestamp(1547393740, 98), t: 4 }, majorityWriteDate: new Date(1547393740000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393740253), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393740, 253), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393740, 98), $configServerState: { opTime: { ts: Timestamp(1547393730, 44), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393740, 254), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.258+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:35:40.000+0000 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.258+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393740, 253), t: 4 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.259+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 42 msec Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.814+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set 
sessions_gce_us_west1 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.814+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.854+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.854+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: "tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547393740, 887), t: 28 }, lastWriteDate: new Date(1547393740000), majorityOpTime: { ts: Timestamp(1547393740, 849), t: 28 }, majorityWriteDate: new Date(1547393740000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393740829), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393740, 888), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547393740, 849), $configServerState: { opTime: { ts: Timestamp(1547393740, 475), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393740, 888), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.854+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 
2019-01-13T15:35:40.000+0000 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.854+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393740, 887), t: 28 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.854+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.893+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.893+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393740, 907), t: 28 }, lastWriteDate: new Date(1547393740000), majorityOpTime: { ts: Timestamp(1547393740, 866), t: 28 }, majorityWriteDate: new Date(1547393740000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393740870), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393740, 907), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393740, 866), $configServerState: { opTime: { ts: Timestamp(1547393737, 928), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393740, 914), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 
ivy mongos[27723]: 2019-01-13T15:35:40.893+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:35:40.000+0000 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.893+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393740, 907), t: 28 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.894+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393740, 888), t: 28 }, lastWriteDate: new Date(1547393740000), majorityOpTime: { ts: Timestamp(1547393740, 849), t: 28 }, majorityWriteDate: new Date(1547393740000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393740891), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393740, 888), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393740, 849), $configServerState: { opTime: { ts: Timestamp(1547393736, 5), t: 1 } }, 
$clusterTime: { clusterTime: Timestamp(1547393740, 889), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:35:40.000+0000 Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393740, 888), t: 28 } Jan 13 15:35:40 ivy mongos[27723]: 2019-01-13T15:35:40.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 took 82 msec Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5acda1824195fadc0fdd Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 56 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.355+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393741355), up: 31, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 56 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.355+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393741355), up: 31, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, 
upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.355+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.546+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D ASIO [ShardRegistry] Request 56 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393741, 256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 256), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 256), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393741, 256), $clusterTime: { clusterTime: Timestamp(1547393741, 458), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393741, 256), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 256), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 256), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393741, 256), $clusterTime: { clusterTime: Timestamp(1547393741, 458), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 57 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.547+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393741, 256), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 57 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.547+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393741, 256), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.547+0000 
D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D ASIO [ShardRegistry] Request 57 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393741, 256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 256), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 256), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393741, 256), $clusterTime: { clusterTime: Timestamp(1547393741, 558), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393741, 256), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 256), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 256), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393741, 256), $clusterTime: { clusterTime: Timestamp(1547393741, 558), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy 
mongos[27723]: 2019-01-13T15:35:41.583+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 58 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.583+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393741, 256), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 58 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.583+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393741, 256), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.583+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D ASIO [ShardRegistry] Request 58 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393741, 460), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 285), t: 1 }, 
lastOpVisible: { ts: Timestamp(1547393741, 285), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393741, 285), $clusterTime: { clusterTime: Timestamp(1547393741, 582), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393741, 460), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 285), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 285), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393741, 285), $clusterTime: { clusterTime: Timestamp(1547393741, 582), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 59 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.620+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393741, 285), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 59 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:11.620+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1547393741, 285), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.620+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.657+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.657+0000 D ASIO [ShardRegistry] Request 59 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393741, 460), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393741, 285), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 285), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393741, 285), $clusterTime: { clusterTime: Timestamp(1547393741, 585), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.657+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393741, 460), $replData: { term: 1, lastOpCommitted: 
{ ts: Timestamp(1547393741, 285), t: 1 }, lastOpVisible: { ts: Timestamp(1547393741, 285), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393741, 285), $clusterTime: { clusterTime: Timestamp(1547393741, 585), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.657+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.847+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1 Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.847+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.947+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.947+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547393741, 773), t: 9 }, lastWriteDate: new Date(1547393741000), majorityOpTime: { ts: Timestamp(1547393741, 738), t: 9 }, majorityWriteDate: new Date(1547393741000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new 
Date(1547393741892), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393741, 773), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393741, 738), $configServerState: { opTime: { ts: Timestamp(1547393741, 460), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393741, 773), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.947+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:35:41.000+0000 Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.947+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393741, 773), t: 9 } Jan 13 15:35:41 ivy mongos[27723]: 2019-01-13T15:35:41.947+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.042+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.043+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393741, 971), t: 9 }, lastWriteDate: new Date(1547393741000), majorityOpTime: { ts: Timestamp(1547393741, 970), t: 9 }, majorityWriteDate: new Date(1547393741000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393741991), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393741, 971), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547393741, 970), $configServerState: { opTime: { ts: Timestamp(1547393737, 453), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393741, 999), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.043+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:35:41.000+0000 Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.043+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393741, 971), t: 9 } Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.043+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 195 msec Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2 Jan 13 15:35:42 ivy mongos[27723]: 2019-01-13T15:35:42.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ 
"francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393743, 2), t: 4 }, lastWriteDate: new Date(1547393743000), majorityOpTime: { ts: Timestamp(1547393742, 637), t: 4 }, majorityWriteDate: new Date(1547393742000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393743025), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393743, 2), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393742, 637), $configServerState: { opTime: { ts: Timestamp(1547393742, 471), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393743, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:35:43.000+0000 Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393743, 2), t: 4 } Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.078+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.184+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.184+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 
based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393743, 36), t: 4 }, lastWriteDate: new Date(1547393743000), majorityOpTime: { ts: Timestamp(1547393743, 12), t: 4 }, majorityWriteDate: new Date(1547393743000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393743126), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393743, 36), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393743, 12), $configServerState: { opTime: { ts: Timestamp(1547393737, 233), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393743, 43), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.184+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:35:43.000+0000 Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.184+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393743, 36), t: 4 } Jan 13 15:35:43 ivy mongos[27723]: 2019-01-13T15:35:43.185+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west2 took 202 msec Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3 Jan 13 
15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.082+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393744, 62), t: 10 }, lastWriteDate: new Date(1547393744000), majorityOpTime: { ts: Timestamp(1547393744, 40), t: 10 }, majorityWriteDate: new Date(1547393744000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393744130), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393744, 62), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393744, 40), $configServerState: { opTime: { ts: Timestamp(1547393743, 525), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393744, 62), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:35:44.000+0000 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.188+0000 D 
NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393744, 62), t: 10 } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 D SHARDING [conn37] Command begin db: admin msg id: 29 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 D SHARDING [conn37] Command end db: admin msg id: 29 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 I COMMAND [conn37] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 D SHARDING [conn37] Command begin db: admin msg id: 31 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 D NETWORK [conn37] Starting server-side compression negotiation Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 D NETWORK [conn37] Compression negotiation not requested by client Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 D SHARDING [conn37] Command end db: admin msg id: 31 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.223+0000 I COMMAND [conn37] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.225+0000 D SHARDING [conn37] Command begin db: admin msg id: 33 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.225+0000 D SHARDING [conn37] Command end db: admin msg id: 33 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.225+0000 I COMMAND [conn37] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D 
SHARDING [conn30] Command begin db: admin msg id: 39 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D NETWORK [conn30] Starting server-side compression negotiation Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D NETWORK [conn30] Compression negotiation not requested by client Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D SHARDING [conn30] Command end db: admin msg id: 39 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D SHARDING [conn37] Command begin db: config msg id: 35 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 60 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D ASIO [conn37] startCommand: RemoteCommand 60 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.227+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.231+0000 D SHARDING [conn30] Command begin db: admin msg 
id: 41 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.231+0000 D SHARDING [conn30] Command end db: admin msg id: 41 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.231+0000 I COMMAND [conn30] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.263+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.263+0000 D ASIO [conn37] Request 60 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 1), $clusterTime: { clusterTime: Timestamp(1547393744, 118), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.263+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 1), $clusterTime: { clusterTime: Timestamp(1547393744, 118), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.263+0000 D SHARDING [conn37] Command end db: config msg id: 35 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.263+0000 I COMMAND [conn37] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D SHARDING [conn37] Command begin db: config msg id: 37 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ad0a1824195fadc0fe8 Jan 13 15:35:44 
ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 61 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D ASIO [conn37] startCommand: RemoteCommand 61 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.264+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393744, 83), t: 10 }, lastWriteDate: new Date(1547393744000), 
majorityOpTime: { ts: Timestamp(1547393744, 83), t: 10 }, majorityWriteDate: new Date(1547393744000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393744234), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393744, 83), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393744, 83), $configServerState: { opTime: { ts: Timestamp(1547393737, 317), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393744, 122), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:35:44.000+0000 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393744, 83), t: 10 } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 207 msec Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.332+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.332+0000 D ASIO [ShardRegistry] Request 61 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, 
operationTime: Timestamp(1547393744, 94), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 166), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.332+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 166), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.332+0000 D SHARDING [conn37] Command end db: config msg id: 37 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.332+0000 I COMMAND [conn37] query config.chunks command: { aggregate: 
"chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 68ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.332+0000 D SHARDING [conn37] Command begin db: config msg id: 39 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.333+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 62 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.333+0000 D ASIO [conn37] startCommand: RemoteCommand 62 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.333+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.333+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.333+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.333+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.369+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.369+0000 D ASIO [conn37] Request 62 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, 
_secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 201), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.369+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 201), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.369+0000 D SHARDING [conn37] Command end db: config msg id: 39 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.369+0000 I COMMAND [conn37] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 36ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D SHARDING [conn37] Command begin db: config msg id: 41 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ad0a1824195fadc0feb Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 63 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393144369) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D ASIO [conn37] startCommand: RemoteCommand 63 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393144369) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.370+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D ASIO [ShardRegistry] Request 63 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 235), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 235), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D SHARDING [conn37] Command end db: config msg id: 41 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 I COMMAND [conn37] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393144369) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 62ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D SHARDING [conn37] Command begin db: config msg id: 43 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 64 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:35:44 ivy 
mongos[27723]: 2019-01-13T15:35:44.432+0000 D ASIO [conn37] startCommand: RemoteCommand 64 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.432+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.433+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.433+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.433+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.469+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.469+0000 D ASIO [conn37] Request 64 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: 
"sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 235), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.469+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: 
"sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 235), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.469+0000 D SHARDING [conn37] Command end db: config msg id: 43 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.469+0000 I COMMAND [conn37] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D SHARDING [conn37] Command begin db: config msg id: 45 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 65 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ 
count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D ASIO [conn37] startCommand: RemoteCommand 65 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.470+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.506+0000 D NETWORK [conn37] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D ASIO [conn37] Request 65 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 235), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393744, 94), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 235), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 
2019-01-13T15:35:44.507+0000 D SHARDING [conn37] Command end db: config msg id: 45 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 I COMMAND [conn37] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D SHARDING [conn37] Command begin db: config msg id: 47 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ad0a1824195fadc0fef Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 66 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D ASIO [conn37] startCommand: RemoteCommand 66 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.507+0000 D NETWORK [conn37] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.548+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.548+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.585+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.586+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393744, 413), t: 3 }, lastWriteDate: new Date(1547393744000), majorityOpTime: { ts: Timestamp(1547393744, 340), t: 3 }, majorityWriteDate: new Date(1547393744000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393744565), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393744, 413), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393744, 340), $configServerState: { opTime: { ts: Timestamp(1547393744, 94), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393744, 413), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.586+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:35:44.000+0000 Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.586+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393744, 413), t: 3 }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.586+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.587+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.587+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393744, 407), t: 3 }, lastWriteDate: new Date(1547393744000), majorityOpTime: { ts: Timestamp(1547393744, 340), t: 3 }, majorityWriteDate: new Date(1547393744000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393744582), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393744, 407), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393744, 340), $configServerState: { opTime: { ts: Timestamp(1547393737, 453), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393744, 408), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.587+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:35:44.000+0000
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.587+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393744, 407), t: 3 }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.587+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 D ASIO [ShardRegistry] Request 66 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 325), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393744, 94), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 325), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 D SHARDING [conn37] Command end db: config msg id: 47
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 I COMMAND [conn37] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 81ms
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 D SHARDING [conn37] Command begin db: config msg id: 49
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.588+0000 D TRACKING [conn37] Cmd: aggregate, TrackingId: 5c3b5ad0a1824195fadc0ff1
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.589+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 67 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.589+0000 D ASIO [conn37] startCommand: RemoteCommand 67 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.589+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.589+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.589+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.589+0000 D NETWORK [conn37] Compressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393744, 439), t: 3 }, lastWriteDate: new Date(1547393744000), majorityOpTime: { ts: Timestamp(1547393744, 382), t: 3 }, majorityWriteDate: new Date(1547393744000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393744601), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393744, 439), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393744, 382), $configServerState: { opTime: { ts: Timestamp(1547393744, 94), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393744, 439), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:35:44.000+0000
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393744, 439), t: 3 }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 76 msec
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D ASIO [ShardRegistry] Request 67 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393744, 424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 424), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393744, 424), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393744, 94), t: 1 }, lastOpVisible: { ts: Timestamp(1547393744, 94), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393741, 256), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 424), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 D SHARDING [conn37] Command end db: config msg id: 49
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.625+0000 I COMMAND [conn37] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D SHARDING [conn37] Command begin db: config msg id: 51
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 68 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D ASIO [conn37] startCommand: RemoteCommand 68 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.626+0000 D NETWORK [conn37] Compressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.663+0000 D NETWORK [conn37] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.663+0000 D ASIO [conn37] Request 68 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393744, 424), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 424), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.663+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393744, 424), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 424), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.664+0000 D SHARDING [conn37] Command end db: config msg id: 51
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.664+0000 I COMMAND [conn37] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D SHARDING [conn37] Command begin db: config msg id: 53
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 69 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393144665) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D ASIO [conn37] startCommand: RemoteCommand 69 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393144665) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.665+0000 D NETWORK [conn37] Compressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.702+0000 D NETWORK [conn37] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.702+0000 D ASIO [conn37] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 69 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393743154), up: 3486940, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393741492), up: 3433077, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393740440), up: 3486838, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393736072), up: 675, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743378), up: 74689, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393738881), up: 74710, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393738947), up: 74684, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393736351), up: 74654, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743040), up: 74661, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743930), up: 74633, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393741365), up: 74603, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393735802), up: 74625, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393740052), up: 74602, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393744544), up: 74580, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393737192), up: 74573, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393736006), up: 74517, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393739897), up: 74550, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743725), up: 74554, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393734914), up: 74516, waiting: true }, { _id: "jacob:270 .......... 75124, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393737560), up: 75088, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393736045), up: 75123, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393736762), up: 75882, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393742853), up: 75948, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393742451), up: 75949, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393741480), up: 75887, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393734285), up: 76469, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393743864), up: 76479, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740478), up: 76415, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740442), up: 76266, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740475), up: 76416, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740444), up: 76203, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393734682), up: 76260, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740647), up: 76204, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740644), up: 76078, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393742327), up: 76143, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393742330), up: 76144, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393741355), up: 31, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740444), up: 76017, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393739222), up: 76077, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393744, 424), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 434), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.702+0000 D EXECUTOR [conn37] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393743154), up: 3486940, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393741492), up: 3433077, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393740440), up: 3486838, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393736072), up: 675, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743378), up: 74689, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393738881), up: 74710, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393738947), up: 74684, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393736351), up: 74654, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743040), up: 74661, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743930), up: 74633, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393741365), up: 74603, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393735802), up: 74625, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393740052), up: 74602, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393744544), up: 74580, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393737192), up: 74573, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393736006), up: 74517, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393739897), up: 74550, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393743725), up: 74554, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393734914), up: 74516, waiting: true }, { _ .......... 75124, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393737560), up: 75088, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393736045), up: 75123, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393736762), up: 75882, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393742853), up: 75948, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393742451), up: 75949, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393741480), up: 75887, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393734285), up: 76469, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393743864), up: 76479, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740478), up: 76415, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740442), up: 76266, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740475), up: 76416, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740444), up: 76203, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393734682), up: 76260, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740647), up: 76204, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740644), up: 76078, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393742327), up: 76143, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393742330), up: 76144, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393741355), up: 31, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393740444), up: 76017, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393739222), up: 76077, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393744, 424), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 434), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.705+0000 D SHARDING [conn37] Command end db: config msg id: 53
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.705+0000 I COMMAND [conn37] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393144665) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:63 reslen:9894 39ms
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D SHARDING [conn37] Command begin db: config msg id: 55
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D EXECUTOR [conn37] Scheduling remote command request: RemoteCommand 70 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D ASIO [conn37] startCommand: RemoteCommand 70 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.707+0000 D NETWORK [conn37] Compressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.743+0000 D NETWORK [conn37] Decompressing message with snappy
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.743+0000 D ASIO [conn37] Request 70 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393744, 456), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 456), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.743+0000 D EXECUTOR [conn37] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393744, 456), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393744, 94), $clusterTime: { clusterTime: Timestamp(1547393744, 456), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.744+0000 D SHARDING [conn37] Command end db: config msg id: 55
Jan 13 15:35:44 ivy mongos[27723]: 2019-01-13T15:35:44.744+0000 I COMMAND [conn37] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 36ms
Jan 13 15:35:48 ivy mongos[27723]: 2019-01-13T15:35:48.612+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] Reaping this thread; next thread reaped no earlier than 2019-01-13T15:36:18.612+0000
Jan 13 15:35:48 ivy mongos[27723]: 2019-01-13T15:35:48.612+0000 D EXECUTOR [ConfigServerCatalogCacheLoader-0] shutting down thread in pool ConfigServerCatalogCacheLoader
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.657+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5ad7a1824195fadc0ff6
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.657+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 71 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:21.657+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393751657), up: 41, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 }
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.657+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 71 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:21.657+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393751657), up: 41, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 }
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.657+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.658+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.658+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:35:51 ivy mongos[27723]: 2019-01-13T15:35:51.658+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.700+0000 D SHARDING [conn30] Command begin db: admin msg id: 43
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.700+0000 D SHARDING [conn30] Command end db: admin msg id: 43
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.700+0000 I COMMAND [conn30] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.707+0000 I NETWORK [listener] connection accepted from 127.0.0.1:28185 #38 (4 connections now open)
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.707+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.707+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.707+0000 D ASIO [ShardRegistry] Request 71 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393751, 468), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393751, 468), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393751, 468), t: 1 }, lastOpVisible: { ts: Timestamp(1547393751, 468), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393751, 468), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393751, 468), $clusterTime: { clusterTime: Timestamp(1547393751, 474), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393751, 468), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393751, 468), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393751, 468), t: 1 }, lastOpVisible: { ts: Timestamp(1547393751, 468), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393751, 468),
t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393751, 468), $clusterTime: { clusterTime: Timestamp(1547393751, 474), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 72 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:36.708+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393751, 468), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 72 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:36.708+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393751, 468), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.708+0000 D SHARDING [conn37] Command begin db: admin msg id: 57 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.709+0000 D SHARDING [conn37] Command end 
db: admin msg id: 57 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.709+0000 I COMMAND [conn37] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.709+0000 D SHARDING [conn38] Command begin db: admin msg id: 1 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.709+0000 D SHARDING [conn38] Command end db: admin msg id: 1 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.709+0000 I COMMAND [conn38] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:205 protocol:op_query 0ms Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 D NETWORK [conn37] Session from 127.0.0.1:27733 encountered a network error during SourceMessage Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 I NETWORK [conn37] end connection 127.0.0.1:27733 (3 connections now open) Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 D NETWORK [conn37] Cancelling outstanding I/O operations on connection to 127.0.0.1:27733 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 D SHARDING [conn38] Command begin db: admin msg id: 3 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 D NETWORK [conn38] Starting server-side compression negotiation Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 D NETWORK [conn38] Compression negotiation not requested by client Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 D SHARDING [conn38] Command end db: admin msg id: 3 Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.712+0000 I COMMAND [conn38] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.748+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:06 
ivy mongos[27723]: 2019-01-13T15:36:06.748+0000 D ASIO [ShardRegistry] Request 72 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393766, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393766, 424), t: 1 }, lastOpVisible: { ts: Timestamp(1547393766, 424), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393766, 424), $clusterTime: { clusterTime: Timestamp(1547393766, 553), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.748+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393766, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393766, 424), t: 1 }, lastOpVisible: { ts: Timestamp(1547393766, 424), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393766, 424), $clusterTime: { clusterTime: Timestamp(1547393766, 553), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 73 -- 
target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:36.749+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393766, 424), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 73 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:36.749+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393766, 424), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.749+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D ASIO [ShardRegistry] Request 73 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393766, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393766, 424), t: 1 }, lastOpVisible: { ts: Timestamp(1547393766, 424), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: 
Timestamp(1547393766, 424), $clusterTime: { clusterTime: Timestamp(1547393766, 553), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393766, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393766, 424), t: 1 }, lastOpVisible: { ts: Timestamp(1547393766, 424), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393766, 424), $clusterTime: { clusterTime: Timestamp(1547393766, 553), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 74 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:36.789+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393766, 424), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 74 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:36.789+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393766, 424), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:06 ivy mongos[27723]: 
2019-01-13T15:36:06.789+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.789+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.827+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.827+0000 D ASIO [ShardRegistry] Request 74 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393766, 578), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393766, 424), t: 1 }, lastOpVisible: { ts: Timestamp(1547393766, 424), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393766, 424), $clusterTime: { clusterTime: Timestamp(1547393766, 578), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.827+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393766, 578), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393766, 424), t: 1 }, lastOpVisible: { ts: Timestamp(1547393766, 424), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393766, 424), $clusterTime: { clusterTime: Timestamp(1547393766, 578), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:06 ivy mongos[27723]: 2019-01-13T15:36:06.827+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.684+0000 I ASIO [ShardRegistry] Ending idle connection to host ira.node.gce-us-east1.admiral:27019 because the pool meets constraints; 2 connections to that host remain open Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.684+0000 D NETWORK [ShardRegistry] Cancelling outstanding I/O operations on connection to 10.142.15.204:27019 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D SHARDING [shard registry reload] Reloading shardRegistry Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D TRACKING [shard registry reload] Cmd: NotSet, TrackingId: 5c3b5ae7a1824195fadc0fff Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D EXECUTOR [shard registry reload] Scheduling remote command request: RemoteCommand 75 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:37.724+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393766, 424), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D ASIO [shard registry reload] startCommand: RemoteCommand 75 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:36:37.724+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393766, 424), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D NETWORK [ShardRegistry] 
Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.724+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D ASIO [ShardRegistry] Request 75 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: 
"sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393767, 498), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393767, 370), t: 1 }, lastOpVisible: { ts: Timestamp(1547393767, 370), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393767, 370), $clusterTime: { clusterTime: Timestamp(1547393767, 498), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host:
"sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393767, 498), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393767, 370), t: 1 }, lastOpVisible: { ts: Timestamp(1547393767, 370), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393767, 370), $clusterTime: { clusterTime: Timestamp(1547393767, 498), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] found 7 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1547393767, 370), t: 1 } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1, with CS sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:36:07 ivy
mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_central1, with CS sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS 
sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D SHARDING [shard registry reload] Adding shard config, with CS sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.763+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.976+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b5ae7a1824195fadc1001 Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.976+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 76 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:37.976+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393767976) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.976+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 76 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:37.976+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393767976) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.977+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.977+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.977+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:07 ivy mongos[27723]: 2019-01-13T15:36:07.977+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:08 ivy mongos[27723]: 2019-01-13T15:36:08.201+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:08 ivy mongos[27723]: 2019-01-13T15:36:08.201+0000 D ASIO [ShardRegistry] Request 76 finished with response: { lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393737789) }, ok: 1.0, operationTime: Timestamp(1547393767, 896), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393767, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393767, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), 
primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393767, 896), $clusterTime: { clusterTime: Timestamp(1547393768, 87), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:08 ivy mongos[27723]: 2019-01-13T15:36:08.201+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393737789) }, ok: 1.0, operationTime: Timestamp(1547393767, 896), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393767, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393767, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393767, 896), $clusterTime: { clusterTime: Timestamp(1547393768, 87), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:08 ivy mongos[27723]: 2019-01-13T15:36:08.201+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.649+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.685+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.685+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 
based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547393769, 419), t: 1 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 315), t: 1 }, majorityWriteDate: new Date(1547393769000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393769665), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 419), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393769, 315), $clusterTime: { clusterTime: Timestamp(1547393769, 456), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.685+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:36:09.000+0000 Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.685+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393769, 419), t: 1 } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.685+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.792+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 
15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.792+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393769, 419), t: 1 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 315), t: 1 }, majorityWriteDate: new Date(1547393769000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393769734), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 419), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 315), $clusterTime: { clusterTime: Timestamp(1547393769, 456), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.792+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:36:09.000+0000 Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.792+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393769, 419), t: 1 } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.792+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:09 ivy 
mongos[27723]: 2019-01-13T15:36:09.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393769, 703), t: 1 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 419), t: 1 }, majorityWriteDate: new Date(1547393769000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393769806), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 703), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 419), $clusterTime: { clusterTime: Timestamp(1547393769, 703), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:36:09.000+0000 Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393769, 703), t: 1 } Jan 13 15:36:09 ivy mongos[27723]: 
2019-01-13T15:36:09.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.869+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.869+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393769, 703), t: 1 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 419), t: 1 }, majorityWriteDate: new Date(1547393769000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393769846), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 703), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 419), $clusterTime: { clusterTime: Timestamp(1547393769, 703), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.869+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:36:09.000+0000 Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.869+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393769, 703), t: 1 } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.869+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393769, 703), t: 1 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 419), t: 1 }, majorityWriteDate: new Date(1547393769000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393769916), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 703), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 419), $clusterTime: { clusterTime: Timestamp(1547393769, 819), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 
2019-01-13T15:36:09.000+0000 Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393769, 703), t: 1 } Jan 13 15:36:09 ivy mongos[27723]: 2019-01-13T15:36:09.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.013+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.013+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393769, 703), t: 1 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 703), t: 1 }, majorityWriteDate: new Date(1547393769000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393769990), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 703), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 703), $clusterTime: { clusterTime: Timestamp(1547393769, 887), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 
2019-01-13T15:36:10.013+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:36:09.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.013+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393769, 703), t: 1 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.013+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 364 msec Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.014+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.014+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.051+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.051+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: "phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { ts: Timestamp(1547393770, 43), t: 22 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393769, 924), t: 22 }, majorityWriteDate: new Date(1547393769000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770028), logicalSessionTimeoutMinutes: 30, 
minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 43), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393769, 924), $configServerState: { opTime: { ts: Timestamp(1547393769, 703), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 44), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.051+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.051+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393770, 43), t: 22 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.051+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.054+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.054+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393769, 962), t: 22 }, lastWriteDate: new Date(1547393769000), majorityOpTime: { ts: Timestamp(1547393769, 846), t: 22 }, majorityWriteDate: new Date(1547393769000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770049), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393769, 962), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 846), $configServerState: { opTime: { ts: Timestamp(1547393763, 735), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 39), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.054+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:36:09.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.054+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393769, 962), t: 22 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.054+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.091+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.091+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393770, 48), t: 22 
}, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393769, 962), t: 22 }, majorityWriteDate: new Date(1547393769000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770068), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 48), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393769, 962), $configServerState: { opTime: { ts: Timestamp(1547393758, 179), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 82), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.091+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.091+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393770, 48), t: 22 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.091+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 77 msec Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.166+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D TRACKING 
[UserCacheInvalidator] Cmd: NotSet, TrackingId: 5c3b5aeaa1824195fadc1003 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D EXECUTOR [UserCacheInvalidator] Scheduling remote command request: RemoteCommand 77 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:36:40.167+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 77 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:36:40.167+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.167+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.203+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.203+0000 D ASIO [ShardRegistry] Request 77 finished with response: { cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393770, 71), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393769, 703), t: 1 }, lastOpVisible: { ts: Timestamp(1547393769, 703), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393769, 703), $clusterTime: 
{ clusterTime: Timestamp(1547393770, 71), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.203+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393770, 71), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393769, 703), t: 1 }, lastOpVisible: { ts: Timestamp(1547393769, 703), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393769, 703), $clusterTime: { clusterTime: Timestamp(1547393770, 71), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.203+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.203+0000 I ASIO [ShardRegistry] Ending idle connection to host ira.node.gce-us-east1.admiral:27019 because the pool meets constraints; 1 connections to that host remain open Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.203+0000 D NETWORK [ShardRegistry] Cancelling outstanding I/O operations on connection to 10.142.15.204:27019 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.259+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_central1 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.259+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.260+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.260+0000 D 
NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393770, 290), t: 4 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393770, 108), t: 4 }, majorityWriteDate: new Date(1547393770000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770255), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 290), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393770, 108), $configServerState: { opTime: { ts: Timestamp(1547393769, 703), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 290), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.260+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.260+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393770, 290), t: 4 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.260+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing 
message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393770, 273), t: 4 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393770, 60), t: 4 }, majorityWriteDate: new Date(1547393770000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770275), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 273), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393770, 60), $configServerState: { opTime: { ts: Timestamp(1547393769, 194), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 280), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: 
Timestamp(1547393770, 273), t: 4 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393770, 374), t: 4 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393770, 273), t: 4 }, majorityWriteDate: new Date(1547393770000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770296), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 374), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393770, 273), $configServerState: { opTime: { ts: Timestamp(1547393760, 14), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 377), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 
15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393770, 374), t: 4 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 42 msec Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_west1 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.896+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.935+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.936+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: "tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547393770, 928), t: 28 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393770, 851), t: 28 }, majorityWriteDate: new Date(1547393770000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770911), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 928), $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547393770, 851), $configServerState: { opTime: { ts: Timestamp(1547393770, 764), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 928), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.936+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.936+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393770, 928), t: 28 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.936+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.938+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.938+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393770, 878), t: 28 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393770, 834), t: 28 }, majorityWriteDate: new Date(1547393770000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770932), logicalSessionTimeoutMinutes: 30, 
minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 878), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393770, 834), $configServerState: { opTime: { ts: Timestamp(1547393766, 101), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 913), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.938+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.938+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393770, 878), t: 28 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.938+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.978+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.978+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393770, 972), t: 28 }, lastWriteDate: new Date(1547393770000), majorityOpTime: { ts: Timestamp(1547393770, 878), t: 28 }, majorityWriteDate: new 
Date(1547393770000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393770954), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393770, 972), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393770, 878), $configServerState: { opTime: { ts: Timestamp(1547393767, 896), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393770, 998), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.978+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:36:10.000+0000 Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.978+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393770, 972), t: 28 } Jan 13 15:36:10 ivy mongos[27723]: 2019-01-13T15:36:10.978+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 took 82 msec Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.043+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1 Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.043+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.143+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.143+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: 
[ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547393772, 27), t: 9 }, lastWriteDate: new Date(1547393772000), majorityOpTime: { ts: Timestamp(1547393772, 15), t: 9 }, majorityWriteDate: new Date(1547393772000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393772088), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393772, 27), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393772, 15), $configServerState: { opTime: { ts: Timestamp(1547393771, 750), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393772, 27), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.143+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:36:12.000+0000 Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.143+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393772, 27), t: 9 } Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.143+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.238+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.239+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 
based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393772, 71), t: 9 }, lastWriteDate: new Date(1547393772000), majorityOpTime: { ts: Timestamp(1547393772, 68), t: 9 }, majorityWriteDate: new Date(1547393772000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393772187), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393772, 71), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547393772, 68), $configServerState: { opTime: { ts: Timestamp(1547393767, 541), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393772, 93), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.239+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:36:12.000+0000 Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.239+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393772, 71), t: 9 } Jan 13 15:36:12 ivy mongos[27723]: 2019-01-13T15:36:12.239+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 196 msec Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.185+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2 Jan 13 15:36:13 
ivy mongos[27723]: 2019-01-13T15:36:13.185+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.281+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.281+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393773, 154), t: 4 }, lastWriteDate: new Date(1547393773000), majorityOpTime: { ts: Timestamp(1547393773, 36), t: 4 }, majorityWriteDate: new Date(1547393773000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393773229), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393773, 154), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393773, 36), $configServerState: { opTime: { ts: Timestamp(1547393772, 725), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393773, 154), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.281+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:36:13.000+0000 Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.281+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393773, 154), t: 4 } Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.281+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.387+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.388+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393773, 187), t: 4 }, lastWriteDate: new Date(1547393773000), majorityOpTime: { ts: Timestamp(1547393773, 187), t: 4 }, majorityWriteDate: new Date(1547393773000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393773330), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393773, 187), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393773, 187), $configServerState: { opTime: { ts: Timestamp(1547393767, 78), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393773, 195), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.388+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 
2019-01-13T15:36:13.000+0000 Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.388+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393773, 187), t: 4 } Jan 13 15:36:13 ivy mongos[27723]: 2019-01-13T15:36:13.388+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west2 took 202 msec Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.223+0000 D SHARDING [conn38] Command begin db: admin msg id: 5 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.223+0000 D SHARDING [conn38] Command end db: admin msg id: 5 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.223+0000 I COMMAND [conn38] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.225+0000 D SHARDING [conn38] Command begin db: admin msg id: 7 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.225+0000 D NETWORK [conn38] Starting server-side compression negotiation Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.225+0000 D NETWORK [conn38] Compression negotiation not requested by client Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.225+0000 D SHARDING [conn38] Command end db: admin msg id: 7 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.225+0000 I COMMAND [conn38] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.227+0000 D SHARDING [conn38] Command begin db: admin msg id: 9 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.227+0000 D NETWORK [conn38] Starting server-side compression negotiation Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.227+0000 D NETWORK [conn38] Compression negotiation not requested by client Jan 13 15:36:14 ivy 
mongos[27723]: 2019-01-13T15:36:14.227+0000 D SHARDING [conn38] Command end db: admin msg id: 9 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.227+0000 I COMMAND [conn38] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.228+0000 D SHARDING [conn38] Command begin db: admin msg id: 11 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.228+0000 D SHARDING [conn38] Command end db: admin msg id: 11 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.228+0000 I COMMAND [conn38] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D SHARDING [conn38] Command begin db: config msg id: 13 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 78 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D ASIO [conn38] startCommand: RemoteCommand 78 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.230+0000 D NETWORK [conn38] Compressing message 
with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.266+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.266+0000 D ASIO [conn38] Request 78 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393774, 105), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 105), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.266+0000 D EXECUTOR [conn38] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393774, 105), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 105), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D SHARDING [conn38] Command end db: config msg id: 13 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 I COMMAND [conn38] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D SHARDING [conn38] Command begin db: config msg id: 15 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D TRACKING [conn38] Cmd: aggregate, TrackingId: 5c3b5aeea1824195fadc100a Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 79 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy 
mongos[27723]: 2019-01-13T15:36:14.267+0000 D ASIO [conn38] startCommand: RemoteCommand 79 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.267+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.289+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.337+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D ASIO [ShardRegistry] Request 79 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393774, 105), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 15), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 15), t: 1 
}, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 266), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393774, 105), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 15), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 15), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 266), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D SHARDING [conn38] Command end db: config msg id: 15 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 I COMMAND [conn38] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 70ms Jan 13 
15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D SHARDING [conn38] Command begin db: config msg id: 17 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 80 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D ASIO [conn38] startCommand: RemoteCommand 80 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.338+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D ASIO [conn38] Request 80 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393774, 105), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 266), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D EXECUTOR [conn38] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393774, 105), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 266), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D SHARDING [conn38] Command end db: config msg id: 17 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 I COMMAND [conn38] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 36ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D SHARDING [conn38] Command begin db: config msg id: 19 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D TRACKING [conn38] Cmd: aggregate, TrackingId: 5c3b5aeea1824195fadc100d Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 81 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393174375) } } }, { $group: { _id: { note: "$details.note", event: 
"$what" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.375+0000 D ASIO [conn38] startCommand: RemoteCommand 81 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393174375) } } }, { $group: { _id: { note: "$details.note", event: "$what" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.376+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.376+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.376+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.376+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393774, 184), t: 10 }, lastWriteDate: new Date(1547393774000), majorityOpTime: { ts: Timestamp(1547393774, 148), t: 10 }, majorityWriteDate: new Date(1547393774000) }, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393774337), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393774, 184), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393774, 148), $configServerState: { opTime: { ts: Timestamp(1547393774, 15), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393774, 184), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:36:14.000+0000 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393774, 184), t: 10 } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.427+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D ASIO [ShardRegistry] Request 81 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393774, 291), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 15), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 15), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { 
clusterTime: Timestamp(1547393774, 373), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393774, 291), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 15), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 15), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 15), $clusterTime: { clusterTime: Timestamp(1547393774, 373), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D SHARDING [conn38] Command end db: config msg id: 19 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 I COMMAND [conn38] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393174375) } } }, { $group: { _id: { note: "$details.note", event: "$what" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 52ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D SHARDING [conn38] Command begin db: config msg id: 21 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 82 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D 
ASIO [conn38] startCommand: RemoteCommand 82 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.428+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.465+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.465+0000 D ASIO [conn38] Request 82 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] 
}, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393774, 291), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 105), $clusterTime: { clusterTime: Timestamp(1547393774, 441), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.465+0000 D EXECUTOR [conn38] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", 
state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393774, 291), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 105), $clusterTime: { clusterTime: Timestamp(1547393774, 441), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.465+0000 D SHARDING [conn38] Command end db: config msg id: 21 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.465+0000 I COMMAND [conn38] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D SHARDING [conn38] Command begin db: config msg id: 23 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 83 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D ASIO 
[conn38] startCommand: RemoteCommand 83 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.466+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.496+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.496+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393774, 236), t: 10 }, lastWriteDate: new Date(1547393774000), majorityOpTime: { ts: Timestamp(1547393774, 236), t: 10 }, majorityWriteDate: new Date(1547393774000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393774441), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393774, 236), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393774, 236), $configServerState: { opTime: { ts: Timestamp(1547393767, 498), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393774, 266), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.496+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:36:14.000+0000 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.496+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393774, 236), t: 10 } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.496+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 207 msec Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.502+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.502+0000 D ASIO [conn38] Request 83 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393774, 291), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 105), $clusterTime: { clusterTime: Timestamp(1547393774, 444), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.502+0000 D EXECUTOR [conn38] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393774, 291), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 105), $clusterTime: { clusterTime: Timestamp(1547393774, 444), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 
2019-01-13T15:36:14.503+0000 D SHARDING [conn38] Command end db: config msg id: 23 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 I COMMAND [conn38] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D SHARDING [conn38] Command begin db: config msg id: 25 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D TRACKING [conn38] Cmd: aggregate, TrackingId: 5c3b5aeea1824195fadc1011 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 84 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D ASIO [conn38] startCommand: RemoteCommand 84 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.503+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.590+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.590+0000 D ASIO [ShardRegistry] Request 84 finished with response: { cursor: { 
firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.590+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 510), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.590+0000 D SHARDING [conn38] Command end db: config msg id: 25 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.590+0000 I COMMAND [conn38] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 87ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D SHARDING [conn38] Command begin db: config msg id: 27 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D TRACKING [conn38] Cmd: aggregate, TrackingId: 5c3b5aeea1824195fadc1013 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 85 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D ASIO [conn38] startCommand: RemoteCommand 85 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D NETWORK [ShardRegistry] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.591+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.625+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.628+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.628+0000 D ASIO [ShardRegistry] Request 85 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.628+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393774, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393774, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393767, 896), t: 1 }, 
electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.628+0000 D SHARDING [conn38] Command end db: config msg id: 27 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.628+0000 I COMMAND [conn38] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 36ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D SHARDING [conn38] Command begin db: config msg id: 29 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 86 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D ASIO [conn38] startCommand: RemoteCommand 86 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.630+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy 
mongos[27723]: 2019-01-13T15:36:14.662+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.662+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393774, 550), t: 3 }, lastWriteDate: new Date(1547393774000), majorityOpTime: { ts: Timestamp(1547393774, 513), t: 3 }, majorityWriteDate: new Date(1547393774000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393774642), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393774, 550), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393774, 513), $configServerState: { opTime: { ts: Timestamp(1547393774, 291), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393774, 557), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.662+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:36:14.000+0000 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.662+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { 
ts: Timestamp(1547393774, 550), t: 3 } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.662+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.666+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D ASIO [conn38] Request 86 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393774, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D EXECUTOR [conn38] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393774, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 550), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D SHARDING [conn38] Command end db: config msg id: 29 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 I COMMAND [conn38] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D SHARDING [conn38] Command begin db: config msg id: 31 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 87 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393174667) } }, comment: 
"/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D ASIO [conn38] startCommand: RemoteCommand 87 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393174667) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.667+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.700+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.701+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393774, 634), t: 3 }, lastWriteDate: new Date(1547393774000), majorityOpTime: { ts: 
Timestamp(1547393774, 537), t: 3 }, majorityWriteDate: new Date(1547393774000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393774677), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393774, 634), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393774, 537), $configServerState: { opTime: { ts: Timestamp(1547393774, 105), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393774, 642), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.701+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:36:14.000+0000 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.701+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393774, 634), t: 3 } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.701+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.702+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.702+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: 
"ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393774, 593), t: 3 }, lastWriteDate: new Date(1547393774000), majorityOpTime: { ts: Timestamp(1547393774, 537), t: 3 }, majorityWriteDate: new Date(1547393774000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393774697), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393774, 593), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393774, 537), $configServerState: { opTime: { ts: Timestamp(1547393767, 498), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393774, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.702+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:36:14.000+0000 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.702+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393774, 593), t: 3 } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.702+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 77 msec Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.705+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.705+0000 D ASIO [conn38] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Request 87 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393774176), up: 3486971, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393772508), up: 3433108, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393771743), up: 3486869, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393766724), up: 705, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773922), up: 74720, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393769787), up: 74741, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393769720), up: 74715, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393767047), up: 74685, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773717), up: 74691, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393774539), up: 74664, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east Jan 13 15:36:14 ivy mongos[27723]: 1.admiral" ], mongoVersion: "4.0.5", ping: new 
Date(1547393772006), up: 74634, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393766312), up: 74656, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393770758), up: 74633, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393764937), up: 74601, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393767948), up: 74604, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393766589), up: 74548, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393770471), up: 74581, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393774341), up: 74585, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393765450), up: 74547, waiting: true }, { _id: "jacob:270 .......... 
75155, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393768290), up: 75119, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393766802), up: 75154, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393767760), up: 75913, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:36:14 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773973), up: 75979, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773576), up: 75980, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393772492), up: 75918, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393765260), up: 76500, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393764696), up: 76500, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771762), up: 76447, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771744), up: 76297, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771763), up: 76447, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771747), up: 76235, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393765725), up: 76291, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771748), up: 76235, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771747), up: 76109, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393773356), up: 76174, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393773457) Jan 13 15:36:14 ivy mongos[27723]: , up: 76175, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393751657), up: 41, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771747), up: 76048, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393770131), up: 76108, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 628), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.705+0000 D EXECUTOR [conn38] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393774176), up: 3486971, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393772508), up: 3433108, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393771743), up: 3486869, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393766724), up: 705, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773922), up: 74720, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393769787), up: 74741, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393769720), up: 74715, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393767047), up: 74685, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773717), up: 74691, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393774539), up: 74664, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:36:14 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393772006), up: 74634, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393766312), up: 74656, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393770758), up: 74633, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393764937), up: 74601, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393767948), up: 74604, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393766589), up: 74548, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393770471), up: 74581, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393774341), up: 74585, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393765450), up: 74547, waiting: true }, { _ .......... 
75155, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393768290), up: 75119, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393766802), up: 75154, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393767760), up: 75913, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:36:14 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773973), up: 75979, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393773576), up: 75980, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393772492), up: 75918, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393765260), up: 76500, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393764696), up: 76500, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771762), up: 76447, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771744), up: 76297, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771763), up: 76447, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771747), up: 76235, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393765725), up: 76291, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771748), up: 76235, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771747), up: 76109, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393773356), up: 76174, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393773457) Jan 13 15:36:14 ivy mongos[27723]: , up: 76175, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393751657), up: 41, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393771747), up: 76048, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393770131), up: 76108, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 291), $clusterTime: { clusterTime: Timestamp(1547393774, 628), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.705+0000 D SHARDING [conn38] Command end db: config msg id: 31 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.705+0000 I COMMAND [conn38] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393174667) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 38ms Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D SHARDING [conn38] Command begin db: config msg id: 33 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D EXECUTOR [conn38] Scheduling remote command request: RemoteCommand 88 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D ASIO [conn38] startCommand: RemoteCommand 88 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.706+0000 D NETWORK [conn38] Compressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.743+0000 D NETWORK [conn38] Decompressing message with snappy Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.743+0000 D ASIO [conn38] Request 88 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393774, 510), $clusterTime: { clusterTime: Timestamp(1547393774, 628), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.743+0000 D EXECUTOR [conn38] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393774, 510), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393774, 510), $clusterTime: { clusterTime: Timestamp(1547393774, 628), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.743+0000 D SHARDING [conn38] Command end db: config msg id: 33 Jan 13 15:36:14 ivy mongos[27723]: 2019-01-13T15:36:14.743+0000 I COMMAND [conn38] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.828+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5af0a1824195fadc1018 Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.828+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 89 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:46.828+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393776827), up: 66, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, 
allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.828+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 89 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:46.828+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393776827), up: 66, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.828+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.829+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.829+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:16 ivy mongos[27723]: 2019-01-13T15:36:16.829+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.062+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.062+0000 D ASIO [ShardRegistry] Request 89 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393776, 895), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393776, 895), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 895), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 895), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393776, 895), t: 1 }, electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393776, 895), $clusterTime: { clusterTime: Timestamp(1547393776, 1023), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.062+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393776, 895), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393776, 895), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 895), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 895), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393776, 895), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393776, 895), $clusterTime: { clusterTime: Timestamp(1547393776, 1023), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.062+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.063+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 90 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:47.063+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 895), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.063+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 90 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:47.063+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1547393776, 895), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.063+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.063+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.063+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.063+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D ASIO [ShardRegistry] Request 90 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393776, 896), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 895), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 895), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393776, 895), $clusterTime: { clusterTime: Timestamp(1547393776, 1023), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393776, 896), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1547393776, 895), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 895), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393776, 895), $clusterTime: { clusterTime: Timestamp(1547393776, 1023), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 91 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:47.101+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 895), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 91 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:47.101+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 895), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.101+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: 
Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.138+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.138+0000 D ASIO [ShardRegistry] Request 91 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393776, 896), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393776, 896), $clusterTime: { clusterTime: Timestamp(1547393777, 77), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.138+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393776, 896), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393776, 896), $clusterTime: { clusterTime: Timestamp(1547393777, 77), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.138+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.139+0000 D EXECUTOR 
[Uptime reporter] Scheduling remote command request: RemoteCommand 92 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:47.139+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 896), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.139+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 92 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:36:47.139+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 896), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.139+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.139+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.139+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.139+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.175+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.175+0000 D ASIO [ShardRegistry] Request 92 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393777, 77), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1547393776, 896), $clusterTime: { clusterTime: Timestamp(1547393777, 77), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.175+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393777, 77), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393776, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393776, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393776, 896), $clusterTime: { clusterTime: Timestamp(1547393777, 77), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:36:17 ivy mongos[27723]: 2019-01-13T15:36:17.175+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:36:19 ivy mongos[27723]: 2019-01-13T15:36:19.383+0000 D NETWORK [TaskExecutorPool-0] Compressing message with snappy Jan 13 15:36:19 ivy mongos[27723]: 2019-01-13T15:36:19.420+0000 D NETWORK [TaskExecutorPool-0] Decompressing message with snappy Jan 13 15:36:19 ivy mongos[27723]: 2019-01-13T15:36:19.420+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b5b1da1824195fadc101d Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 94 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:31.515+0000 cmd:{ findAndModify: "lockpings", query: { _id: 
"ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393821515) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 94 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:31.515+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393821515) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D SHARDING [shard registry reload] Reloading shardRegistry Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D TRACKING [shard registry reload] Cmd: NotSet, TrackingId: 5c3b5b1da1824195fadc101f Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D EXECUTOR [shard registry reload] Scheduling remote command request: RemoteCommand 95 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:31.515+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 896), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D ASIO [shard registry reload] startCommand: RemoteCommand 95 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:31.515+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393776, 896), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 I NETWORK [listener] connection accepted from 127.0.0.1:28615 #39 (4 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 I NETWORK 
[listener] connection accepted from 127.0.0.1:28681 #40 (5 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 I NETWORK [listener] connection accepted from 127.0.0.1:28735 #41 (6 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 I NETWORK [listener] connection accepted from 127.0.0.1:28803 #42 (7 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.515+0000 D EXECUTOR [listener] Starting new executor thread in passthrough mode Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.523+0000 D SHARDING [conn30] Command begin db: admin msg id: 45 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.523+0000 D NETWORK [conn30] Starting server-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.523+0000 D NETWORK [conn30] Compression negotiation not requested by client Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.523+0000 D SHARDING [conn30] Command end db: admin msg id: 45 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.523+0000 I COMMAND [conn30] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.523+0000 D TRACKING [UserCacheInvalidator] Cmd: NotSet, TrackingId: 5c3b5b1da1824195fadc1022 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.525+0000 D EXECUTOR [UserCacheInvalidator] Scheduling remote command request: RemoteCommand 96 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:37:31.525+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:37:01 ivy 
mongos[27723]: 2019-01-13T15:37:01.525+0000 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 96 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:37:31.525+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b1da1824195fadc1024 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 97 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:31.526+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393821525), up: 111, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 97 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:31.526+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393821525), up: 111, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 
2019-01-13T15:37:01.526+0000 D NETWORK [conn30] Session from 127.0.0.1:27567 encountered a network error during SourceMessage Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 I NETWORK [conn30] end connection 127.0.0.1:27567 (6 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [conn30] Cancelling outstanding I/O operations on connection to 127.0.0.1:27567 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 I ASIO [ShardRegistry] Connecting to ira.node.gce-us-east1.admiral:27019 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D ASIO [ShardRegistry] Finished connection setup. Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 I ASIO [ShardRegistry] Connecting to ira.node.gce-us-east1.admiral:27019 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D ASIO [ShardRegistry] Finished connection setup. 
Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.526+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.531+0000 D SHARDING [conn38] Command begin db: admin msg id: 35 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.531+0000 D SHARDING [conn38] Command end db: admin msg id: 35 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.531+0000 I COMMAND [conn38] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.532+0000 D SHARDING [conn39] Command begin db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.532+0000 D SHARDING [conn39] Command end db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.532+0000 I COMMAND [conn39] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:206 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.532+0000 D SHARDING [conn40] Command begin db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.533+0000 D 
SHARDING [conn41] Command begin db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.533+0000 D SHARDING [conn41] Command end db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.533+0000 I COMMAND [conn41] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:206 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.533+0000 D SHARDING [conn42] Command begin db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.533+0000 D SHARDING [conn42] Command end db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.533+0000 I COMMAND [conn42] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:206 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.534+0000 D SHARDING [conn40] Command end db: admin msg id: 1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.534+0000 I COMMAND [conn40] command admin.$cmd command: getnonce { getnonce: 1, $db: "admin" } numYields:0 reslen:206 protocol:op_query 1ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.534+0000 D SHARDING [conn41] Command begin db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.535+0000 D NETWORK [conn38] Session from 127.0.0.1:28185 encountered a network error during SourceMessage Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.535+0000 I NETWORK [conn38] end connection 127.0.0.1:28185 (5 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.535+0000 D NETWORK [conn38] Cancelling outstanding I/O operations on connection to 127.0.0.1:28185 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 D SHARDING [conn40] Command begin db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 D NETWORK [conn40] Starting server-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 D NETWORK 
[conn40] Compression negotiation not requested by client Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 D SHARDING [conn40] Command end db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 I COMMAND [conn40] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 I NETWORK [conn40] Error sending response to client: SocketException: Broken pipe. Ending connection from 127.0.0.1:28681 (connection id: 40) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 I NETWORK [conn40] end connection 127.0.0.1:28681 (4 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.536+0000 D NETWORK [conn40] Cancelling outstanding I/O operations on connection to 127.0.0.1:28681 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D SHARDING [conn39] Command begin db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D NETWORK [conn39] Starting server-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D NETWORK [conn39] Compression negotiation not requested by client Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D SHARDING [conn39] Command end db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 I COMMAND [conn39] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 I NETWORK [conn39] Error sending response to client: SocketException: Broken pipe. 
Ending connection from 127.0.0.1:28615 (connection id: 39) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 I NETWORK [conn39] end connection 127.0.0.1:28615 (3 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D NETWORK [conn39] Cancelling outstanding I/O operations on connection to 127.0.0.1:28615 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D NETWORK [conn41] Starting server-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D NETWORK [conn41] Compression negotiation not requested by client Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D SHARDING [conn41] Command end db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 I COMMAND [conn41] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 2ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 I NETWORK [conn41] Error sending response to client: SocketException: Broken pipe. 
Ending connection from 127.0.0.1:28735 (connection id: 41) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 I NETWORK [conn41] end connection 127.0.0.1:28735 (2 connections now open) Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.537+0000 D NETWORK [conn41] Cancelling outstanding I/O operations on connection to 127.0.0.1:28735 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.538+0000 D SHARDING [conn42] Command begin db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.538+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.538+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.538+0000 D SHARDING [conn42] Command end db: admin msg id: 3 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.538+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.539+0000 D SHARDING [conn42] Command begin db: admin msg id: 5 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.539+0000 D SHARDING [conn42] Command end db: admin msg id: 5 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.539+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.540+0000 D SHARDING [conn42] Command begin db: admin msg id: 7 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.541+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.541+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.541+0000 D SHARDING [conn42] 
Command end db: admin msg id: 7 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.541+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.543+0000 D SHARDING [conn42] Command begin db: admin msg id: 9 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.544+0000 D SHARDING [conn42] Command end db: admin msg id: 9 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.544+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.545+0000 D SHARDING [conn42] Command begin db: config msg id: 11 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.546+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 98 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.546+0000 D ASIO [conn42] startCommand: RemoteCommand 98 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.546+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.546+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.546+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.546+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 
2019-01-13T15:37:01.564+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.564+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547393821, 438), t: 1 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 328), t: 1 }, majorityWriteDate: new Date(1547393821000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821543), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 438), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.564+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.564+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393821, 438), t: 1 } Jan 13 15:37:01 ivy 
mongos[27723]: 2019-01-13T15:37:01.564+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D ASIO [ShardRegistry] Request 95 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393821, 328), $replData: { term: 
1, lastOpCommitted: { ts: Timestamp(1547393821, 328), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 328), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 422), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, 
{ _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393821, 328), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 328), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 328), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 422), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] found 7 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1547393821, 328), t: 1 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1, with CS sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for 
sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_central1, with CS sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 
2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.565+0000 D SHARDING [shard registry reload] Adding shard config, with CS sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.582+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.582+0000 D ASIO [conn42] Request 98 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 438), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 
2019-01-13T15:37:01.582+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 438), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.582+0000 D SHARDING [conn42] Command end db: config msg id: 11 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.582+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.583+0000 D SHARDING [conn42] Command begin db: config msg id: 13 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.583+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b1da1824195fadc1033 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.583+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 99 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.583+0000 D ASIO [conn42] startCommand: RemoteCommand 99 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.601+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.601+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ 
"ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393821, 438), t: 1 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 328), t: 1 }, majorityWriteDate: new Date(1547393821000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821578), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 438), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.601+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.601+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393821, 438), t: 1 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.601+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.630+0000 D NETWORK [ShardRegistry] Starting client-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.630+0000 D NETWORK [ShardRegistry] 
Offering snappy compressor to server Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.630+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D NETWORK [ShardRegistry] Finishing client-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D NETWORK [ShardRegistry] Received message compressors from server Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D NETWORK [ShardRegistry] Adding compressor snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 I ASIO [ShardRegistry] Connecting to ira.node.gce-us-east1.admiral:27019 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D ASIO [ShardRegistry] Finished connection setup. Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.672+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.686+0000 D NETWORK [ShardRegistry] Starting client-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.686+0000 D NETWORK [ShardRegistry] Offering snappy compressor to server Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.686+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.707+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.708+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { 
hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393821, 438), t: 1 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 328), t: 1 }, majorityWriteDate: new Date(1547393821000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821649), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 621), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.708+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.708+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393821, 438), t: 1 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.708+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.708+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.709+0000 D ASIO 
[ShardRegistry] Request 96 finished with response: { cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393821, 438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 328), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 328), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.709+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393821, 438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 328), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 328), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 328), $clusterTime: { clusterTime: Timestamp(1547393821, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.709+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.709+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.709+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.709+0000 D NETWORK [ShardRegistry] 
Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.714+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.714+0000 D ASIO [ShardRegistry] Request 94 finished with response: { lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393767976) }, ok: 1.0, operationTime: Timestamp(1547393821, 438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 438), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 438), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.714+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393767976) }, ok: 1.0, operationTime: Timestamp(1547393821, 438), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 438), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 438), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 
2019-01-13T15:37:01.714+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.714+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.714+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.714+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.727+0000 D NETWORK [ShardRegistry] Finishing client-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.727+0000 D NETWORK [ShardRegistry] Received message compressors from server Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.727+0000 D NETWORK [ShardRegistry] Adding compressor snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.728+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.747+0000 D NETWORK [ShardRegistry] Starting client-side compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.747+0000 D NETWORK [ShardRegistry] Offering snappy compressor to server Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.747+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", 
"newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393821, 438), t: 1 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, majorityWriteDate: new Date(1547393821000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821723), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393821, 438), t: 1 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.785+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D ASIO [ShardRegistry] Request 99 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: 
"sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393821, 654), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 438), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 438), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 654), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393821, 654), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 438), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 438), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 654), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D SHARDING [conn42] Command end db: config msg id: 13 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 202ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D SHARDING [conn42] Command begin db: config msg id: 15 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 100 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D ASIO [conn42] startCommand: RemoteCommand 100 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [ShardRegistry] Finishing client-side 
compression negotiation Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [ShardRegistry] Received message compressors from server Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [ShardRegistry] Adding compressor snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.786+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.791+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.791+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393821, 438), t: 1 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, majorityWriteDate: new Date(1547393821000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821765), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 438), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: 
Timestamp(1547393821, 634), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.791+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.791+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393821, 438), t: 1 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.791+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.823+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.823+0000 D ASIO [conn42] Request 100 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393821, 695), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 695), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.823+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393821, 695), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 695), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy 
mongos[27723]: 2019-01-13T15:37:01.823+0000 D SHARDING [conn42] Command end db: config msg id: 15 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.823+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D SHARDING [conn42] Command begin db: config msg id: 17 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b1da1824195fadc1036 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 101 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393221824) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D ASIO [conn42] startCommand: RemoteCommand 101 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393221824) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.824+0000 D 
NETWORK [conn42] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.877+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.877+0000 D ASIO [ShardRegistry] Request 101 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393821, 695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 438), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 438), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 731), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.877+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393821, 695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 438), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 438), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 731), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D SHARDING [conn42] Command end db: config msg id: 17 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", 
pipeline: [ { $match: { time: { $gt: new Date(1547393221824) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 53ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D SHARDING [conn42] Command begin db: config msg id: 19 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 102 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D ASIO [conn42] startCommand: RemoteCommand 102 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.878+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: 
[ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393821, 695), t: 1 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 438), t: 1 }, majorityWriteDate: new Date(1547393821000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821840), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 695), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 695), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393821, 695), t: 1 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 372 msec Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 15:37:01 ivy 
mongos[27723]: 2019-01-13T15:37:01.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.915+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.915+0000 D ASIO [conn42] Request 102 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393821, 695), $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 731), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.915+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: 
Timestamp(1547393821, 695), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 438), $clusterTime: { clusterTime: Timestamp(1547393821, 731), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.915+0000 D SHARDING [conn42] Command end db: config msg id: 19 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.915+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D SHARDING [conn42] Command begin db: config msg id: 21 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 103 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D ASIO [conn42] startCommand: RemoteCommand 103 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.916+0000 D NETWORK 
[conn42] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D ASIO [ShardRegistry] Request 97 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393821, 695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 654), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 654), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 654), $clusterTime: { clusterTime: Timestamp(1547393821, 746), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393821, 695), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 654), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 654), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 654), $clusterTime: { clusterTime: Timestamp(1547393821, 746), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D NETWORK [ShardRegistry] Timer 
received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 104 -- target:newton.node.gce-europe-west3.admiral:27019 db:config expDate:2019-01-13T15:37:31.923+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393821, 654), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 104 -- target:newton.node.gce-europe-west3.admiral:27019 db:config expDate:2019-01-13T15:37:31.923+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393821, 654), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 I ASIO [ShardRegistry] Connecting to newton.node.gce-europe-west3.admiral:27019 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.923+0000 D ASIO [ShardRegistry] Finished connection setup. 
Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.937+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.937+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: "phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { ts: Timestamp(1547393821, 863), t: 22 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 740), t: 22 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821913), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 863), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393821, 740), $configServerState: { opTime: { ts: Timestamp(1547393821, 438), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393821, 864), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.937+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.937+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393821, 863), t: 22 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.937+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.952+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.952+0000 D ASIO [conn42] Request 103 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393821, 695), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 654), $clusterTime: { clusterTime: Timestamp(1547393821, 878), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.952+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393821, 695), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 654), $clusterTime: { clusterTime: Timestamp(1547393821, 878), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D SHARDING [conn42] Command end db: config msg id: 21 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D SHARDING [conn42] Command begin db: config msg id: 23 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b1da1824195fadc103b Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 105 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D ASIO [conn42] startCommand: RemoteCommand 105 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.953+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393821, 881), t: 22 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 818), t: 22 }, majorityWriteDate: new 
Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821951), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 881), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 818), $configServerState: { opTime: { ts: Timestamp(1547393818, 615), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393821, 944), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393821, 881), t: 22 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.975+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: 
Timestamp(1547393821, 871), t: 22 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 740), t: 22 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821973), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 871), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 740), $configServerState: { opTime: { ts: Timestamp(1547393806, 181), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393821, 893), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393821, 871), t: 22 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 81 msec Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_central1 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.979+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host 
camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393821, 903), t: 4 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 747), t: 4 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821976), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 903), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393821, 747), $configServerState: { opTime: { ts: Timestamp(1547393821, 654), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393821, 961), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393821, 903), t: 4 } Jan 13 15:37:01 ivy mongos[27723]: 2019-01-13T15:37:01.982+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 
2019-01-13T15:37:02.022+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.022+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393821, 888), t: 4 }, lastWriteDate: new Date(1547393821000), majorityOpTime: { ts: Timestamp(1547393821, 747), t: 4 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393821997), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393821, 888), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 747), $configServerState: { opTime: { ts: Timestamp(1547393799, 68), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393821, 888), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.022+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:01.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.022+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393821, 888), t: 4 } Jan 13 15:37:02 ivy 
mongos[27723]: 2019-01-13T15:37:02.022+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 2), t: 4 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393821, 803), t: 4 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822018), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 2), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 803), $configServerState: { opTime: { ts: Timestamp(1547393820, 71), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D 
NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393822, 2), t: 4 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 45 msec Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_west1 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.025+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.026+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.026+0000 D ASIO [ShardRegistry] Request 105 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393821, 977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 695), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 695), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 695), $clusterTime: { clusterTime: Timestamp(1547393821, 977), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.026+0000 D EXECUTOR [ShardRegistry] Received remote response: 
RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393821, 977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 695), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 695), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 695), $clusterTime: { clusterTime: Timestamp(1547393821, 977), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.026+0000 D SHARDING [conn42] Command end db: config msg id: 23 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.026+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 73ms Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D SHARDING [conn42] Command begin db: config msg id: 25 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b1ea1824195fadc103d Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 106 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: 
"$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D ASIO [conn42] startCommand: RemoteCommand 106 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.027+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.064+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.064+0000 D ASIO [ShardRegistry] Request 106 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393821, 977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 695), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 695), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 695), $clusterTime: { clusterTime: Timestamp(1547393821, 992), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 
2019-01-13T15:37:02.064+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393821, 977), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 695), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 695), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 695), $clusterTime: { clusterTime: Timestamp(1547393821, 992), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.064+0000 D SHARDING [conn42] Command end db: config msg id: 25 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.064+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: 
"tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547393822, 25), t: 28 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393821, 869), t: 28 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822041), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 25), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547393821, 869), $configServerState: { opTime: { ts: Timestamp(1547393821, 695), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 25), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393822, 25), t: 28 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D SHARDING [conn42] Command begin db: config msg id: 27 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 107 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D ASIO [conn42] startCommand: 
RemoteCommand 107 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.066+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.103+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.103+0000 D ASIO [conn42] Request 107 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393821, 977), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 695), $clusterTime: { clusterTime: Timestamp(1547393821, 992), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.103+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393821, 977), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 695), $clusterTime: { clusterTime: Timestamp(1547393821, 992), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.103+0000 D SHARDING [conn42] Command end db: config msg id: 27 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 
I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D SHARDING [conn42] Command begin db: config msg id: 29 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 108 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393222104) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D ASIO [conn42] startCommand: RemoteCommand 108 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393222104) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.104+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.107+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.107+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ 
"william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 48), t: 28 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393821, 950), t: 28 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822083), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 48), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 950), $configServerState: { opTime: { ts: Timestamp(1547393804, 231), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 52), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.107+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.107+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393822, 48), t: 28 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.107+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.109+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 
2019-01-13T15:37:02.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 17), t: 28 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393821, 950), t: 28 }, majorityWriteDate: new Date(1547393821000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822104), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 17), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 950), $configServerState: { opTime: { ts: Timestamp(1547393811, 1009), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 52), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393822, 17), t: 28 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 
took 84 msec Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.141+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.141+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 108 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393815544), up: 3487012, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393813854), up: 3433150, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393813216), up: 3486910, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393817807), up: 756, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393814770), up: 74760, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393821749), up: 74793, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393820773), up: 74766, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], 
mongoVersion: "4.0.5", ping: new Date(1547393818288), up: 74736, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393814637), up: 74732, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393815328), up: 74705, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-eas Jan 13 15:37:02 ivy mongos[27723]: t1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393812802), up: 74675, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393817504), up: 74707, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393821681), up: 74684, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393816025), up: 74652, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393819220), up: 74655, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393817436), up: 74599, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393821329), up: 74632, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393815044), up: 74626, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new 
Date(1547393816292), up: 74597, waiting: true }, { _id: "jacob:27 .......... 75206, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393819593), up: 75170, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393818114), up: 75205, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393819307), up: 75965, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:02 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393815407), up: 76020, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393814999), up: 76021, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813840), up: 75959, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393816925), up: 76552, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393816613), up: 76552, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813253), up: 76488, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813119), up: 76338, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813256), up: 76488, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813121), up: 76276, waiting: true }, { _id: 
"mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393817274), up: 76342, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813120), up: 76276, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813121), up: 76151, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393814779), up: 76215, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393814780) Jan 13 15:37:02 ivy mongos[27723]: , up: 76216, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393821525), up: 111, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813121), up: 76089, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393821886), up: 76159, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393821, 977), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 879), $clusterTime: { clusterTime: Timestamp(1547393822, 44), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.141+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393815544), up: 3487012, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393813854), up: 3433150, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393813216), up: 3486910, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393817807), up: 756, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393814770), up: 74760, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393821749), up: 74793, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393820773), up: 74766, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393818288), up: 74736, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393814637), up: 74732, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393815328), up: 74705, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:37:02 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393812802), up: 74675, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393817504), up: 74707, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393821681), up: 74684, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393816025), up: 74652, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393819220), up: 74655, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393817436), up: 74599, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393821329), up: 74632, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393815044), up: 74626, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393816292), up: 74597, waiting: true }, { _ .......... 
75206, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393819593), up: 75170, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393818114), up: 75205, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393819307), up: 75965, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:02 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393815407), up: 76020, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393814999), up: 76021, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813840), up: 75959, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393816925), up: 76552, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393816613), up: 76552, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813253), up: 76488, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813119), up: 76338, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813256), up: 76488, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813121), up: 76276, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393817274), up: 76342, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813120), up: 76276, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813121), up: 76151, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393814779), up: 76215, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393814780) Jan 13 15:37:02 ivy mongos[27723]: , up: 76216, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393821525), up: 111, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393813121), up: 76089, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393821886), up: 76159, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393821, 977), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 879), $clusterTime: { clusterTime: Timestamp(1547393822, 44), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.142+0000 D SHARDING [conn42] Command end db: config msg id: 29 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.142+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393222104) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 37ms Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D SHARDING [conn42] Command begin db: config msg id: 31 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 109 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D ASIO [conn42] startCommand: RemoteCommand 109 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.143+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.166+0000 D NETWORK [ShardRegistry] Starting client-side compression negotiation Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.166+0000 D NETWORK [ShardRegistry] Offering snappy compressor to server Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.166+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 
2019-01-13T15:37:02.179+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.179+0000 D ASIO [conn42] Request 109 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393822, 51), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 879), $clusterTime: { clusterTime: Timestamp(1547393822, 68), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.180+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393822, 51), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393821, 879), $clusterTime: { clusterTime: Timestamp(1547393822, 68), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.180+0000 D SHARDING [conn42] Command end db: config msg id: 31 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.180+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.210+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.211+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster 
reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547393822, 240), t: 9 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 133), t: 9 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822155), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 240), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393822, 133), $configServerState: { opTime: { ts: Timestamp(1547393821, 695), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 240), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.211+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.211+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393822, 240), t: 9 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.211+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.271+0000 D NETWORK [ShardRegistry] Finishing client-side compression negotiation Jan 13 15:37:02 ivy mongos[27723]: 
2019-01-13T15:37:02.271+0000 D NETWORK [ShardRegistry] Received message compressors from server Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.271+0000 D NETWORK [ShardRegistry] Adding compressor snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.271+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.272+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.272+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.272+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 282), t: 9 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 267), t: 9 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822255), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 282), $gleStats: { lastOpTime: 
Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547393822, 267), $configServerState: { opTime: { ts: Timestamp(1547393817, 844), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 287), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393822, 282), t: 9 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 197 msec Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.307+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D ASIO [ShardRegistry] Request 104 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393822, 51), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 977), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 977), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 2 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') 
}, lastCommittedOpTime: Timestamp(1547393821, 977), $clusterTime: { clusterTime: Timestamp(1547393822, 264), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393822, 51), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393821, 977), t: 1 }, lastOpVisible: { ts: Timestamp(1547393821, 977), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 2 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393821, 977), $clusterTime: { clusterTime: Timestamp(1547393822, 264), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 110 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:32.377+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393821, 977), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 110 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:32.377+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393821, 977), t: 1 } }, limit: 1, 
maxTimeMS: 30000 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.377+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.378+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.378+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.378+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.403+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.403+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393822, 302), t: 4 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 294), t: 4 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822350), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 302), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393822, 294), $configServerState: { 
opTime: { ts: Timestamp(1547393821, 879), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 302), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.403+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.404+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393822, 302), t: 4 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.404+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.414+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.414+0000 D ASIO [ShardRegistry] Request 110 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393822, 51), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393822, 51), t: 1 }, lastOpVisible: { ts: Timestamp(1547393822, 51), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393822, 51), $clusterTime: { clusterTime: Timestamp(1547393822, 287), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.414+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393822, 51), $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1547393822, 51), t: 1 }, lastOpVisible: { ts: Timestamp(1547393822, 51), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393821, 654), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393822, 51), $clusterTime: { clusterTime: Timestamp(1547393822, 287), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 111 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:32.415+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393822, 51), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 111 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:32.415+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393822, 51), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.415+0000 D NETWORK 
[ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.452+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.452+0000 D ASIO [ShardRegistry] Request 111 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393822, 288), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393822, 51), t: 1 }, lastOpVisible: { ts: Timestamp(1547393822, 51), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393822, 51), $clusterTime: { clusterTime: Timestamp(1547393822, 288), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.452+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393822, 288), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393822, 51), t: 1 }, lastOpVisible: { ts: Timestamp(1547393822, 51), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393822, 51), $clusterTime: { clusterTime: Timestamp(1547393822, 288), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.452+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 472), t: 4 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 472), t: 4 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822453), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 472), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393822, 472), $configServerState: { opTime: { ts: Timestamp(1547393821, 977), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 524), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393822, 472), t: 4 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing 
replica set sessions_gce_europe_west2 took 204 msec Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.511+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.618+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.618+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393822, 591), t: 10 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 578), t: 10 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822560), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 591), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393822, 578), $configServerState: { opTime: { ts: Timestamp(1547393822, 51), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 591), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 
2019-01-13T15:37:02.618+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.618+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393822, 591), t: 10 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.618+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 702), t: 10 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 614), t: 10 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822664), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 702), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393822, 614), $configServerState: { opTime: { ts: Timestamp(1547393797, 928), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 704), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393822, 702), t: 10 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 207 msec Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.719+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.757+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.757+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393822, 724), t: 3 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 660), t: 3 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822736), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 724), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393822, 660), $configServerState: { opTime: { ts: Timestamp(1547393822, 288), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 724), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.757+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.757+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393822, 724), t: 3 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.757+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.760+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.760+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 699), t: 3 }, lastWriteDate: new 
Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 626), t: 3 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822754), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 699), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393822, 626), $configServerState: { opTime: { ts: Timestamp(1547393810, 238), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 713), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.760+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.760+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393822, 699), t: 3 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.760+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.798+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.798+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: 
"queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393822, 735), t: 3 }, lastWriteDate: new Date(1547393822000), majorityOpTime: { ts: Timestamp(1547393822, 660), t: 3 }, majorityWriteDate: new Date(1547393822000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393822774), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393822, 735), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393822, 660), $configServerState: { opTime: { ts: Timestamp(1547393820, 741), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393822, 735), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.798+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:02.000+0000 Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.798+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393822, 735), t: 3 } Jan 13 15:37:02 ivy mongos[27723]: 2019-01-13T15:37:02.798+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 78 msec Jan 13 15:37:06 ivy mongos[27723]: 2019-01-13T15:37:06.827+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:06 ivy mongos[27723]: 2019-01-13T15:37:06.866+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:06 ivy mongos[27723]: 2019-01-13T15:37:06.866+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:07 ivy mongos[27723]: 2019-01-13T15:37:07.763+0000 D NETWORK 
[ShardRegistry] Compressing message with snappy Jan 13 15:37:07 ivy mongos[27723]: 2019-01-13T15:37:07.803+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:07 ivy mongos[27723]: 2019-01-13T15:37:07.803+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:10 ivy mongos[27723]: 2019-01-13T15:37:10.167+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms Jan 13 15:37:10 ivy mongos[27723]: 2019-01-13T15:37:10.167+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms Jan 13 15:37:10 ivy mongos[27723]: 2019-01-13T15:37:10.167+0000 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager Jan 13 15:37:10 ivy mongos[27723]: 2019-01-13T15:37:10.167+0000 D COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.452+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b28a1824195fadc1044 Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.453+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 114 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:42.453+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393832452), up: 122, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.453+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 114 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:42.453+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { 
_id: "ivy:27018", ping: new Date(1547393832452), up: 122, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.453+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.453+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.453+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.453+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D ASIO [ShardRegistry] Request 114 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393832, 451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393832, 451), $clusterTime: { clusterTime: Timestamp(1547393832, 645), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 
1, opTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393832, 451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393832, 451), $clusterTime: { clusterTime: Timestamp(1547393832, 645), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 115 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:37:42.605+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393832, 451), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 115 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:37:42.605+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393832, 451), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 
2019-01-13T15:37:12.605+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.605+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D ASIO [ShardRegistry] Request 115 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393832, 554), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393832, 451), $clusterTime: { clusterTime: Timestamp(1547393832, 692), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393832, 554), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393832, 451), $clusterTime: { clusterTime: 
Timestamp(1547393832, 692), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 116 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:37:42.682+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393832, 451), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 116 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:37:42.682+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393832, 451), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.682+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.722+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.722+0000 D ASIO [ShardRegistry] Request 116 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 
1.0, operationTime: Timestamp(1547393832, 554), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393832, 451), $clusterTime: { clusterTime: Timestamp(1547393832, 692), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.722+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393832, 554), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393832, 451), $clusterTime: { clusterTime: Timestamp(1547393832, 692), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.722+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 117 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:42.722+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393832, 451), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.722+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 117 -- target:ira.node.gce-us-east1.admiral:27019 db:config 
expDate:2019-01-13T15:37:42.722+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393832, 451), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.723+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.723+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.723+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.723+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.723+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.759+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.759+0000 D ASIO [ShardRegistry] Request 117 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393832, 554), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 554), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 554), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393832, 554), $clusterTime: { clusterTime: Timestamp(1547393832, 751), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.759+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- 
cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393832, 554), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393832, 554), t: 1 }, lastOpVisible: { ts: Timestamp(1547393832, 554), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393832, 554), $clusterTime: { clusterTime: Timestamp(1547393832, 751), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:12 ivy mongos[27723]: 2019-01-13T15:37:12.760+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:13 ivy mongos[27723]: 2019-01-13T15:37:13.710+0000 D SHARDING [conn42] Command begin db: admin msg id: 33 Jan 13 15:37:13 ivy mongos[27723]: 2019-01-13T15:37:13.710+0000 D SHARDING [conn42] Command end db: admin msg id: 33 Jan 13 15:37:13 ivy mongos[27723]: 2019-01-13T15:37:13.710+0000 I COMMAND [conn42] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 35 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.223+0000 D SHARDING [conn42] Command end db: admin msg id: 35 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.225+0000 D SHARDING [conn42] Command begin db: admin msg id: 37 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.225+0000 D NETWORK [conn42] Starting server-side compression negotiation 
Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.225+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.225+0000 D SHARDING [conn42] Command end db: admin msg id: 37 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.225+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.229+0000 D SHARDING [conn42] Command begin db: admin msg id: 39 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.229+0000 D SHARDING [conn42] Command end db: admin msg id: 39 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.229+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D SHARDING [conn42] Command begin db: config msg id: 41 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 118 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D ASIO [conn42] startCommand: RemoteCommand 118 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D NETWORK [TaskExecutorPool-0] Timer 
received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.230+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.268+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.268+0000 D ASIO [conn42] Request 118 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393834, 210), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 9), $clusterTime: { clusterTime: Timestamp(1547393834, 210), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.268+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393834, 210), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 9), $clusterTime: { clusterTime: Timestamp(1547393834, 210), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.268+0000 D SHARDING [conn42] Command end db: config msg id: 41 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.268+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 38ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D SHARDING [conn42] Command begin db: config msg id: 43 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b2aa1824195fadc104e Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 119 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ 
aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D ASIO [conn42] startCommand: RemoteCommand 119 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.269+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.337+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.337+0000 D ASIO [ShardRegistry] Request 119 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 9), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 9), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: 
Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 9), $clusterTime: { clusterTime: Timestamp(1547393834, 332), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.337+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 9), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 9), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 9), $clusterTime: { clusterTime: Timestamp(1547393834, 332), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.337+0000 D SHARDING [conn42] Command end db: config msg id: 43 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.337+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 68ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.338+0000 D SHARDING [conn42] Command begin db: config msg id: 45 Jan 13 15:37:14 ivy mongos[27723]: 
2019-01-13T15:37:14.338+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 120 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.338+0000 D ASIO [conn42] startCommand: RemoteCommand 120 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.338+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.338+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.338+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.338+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D ASIO [conn42] Request 120 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 9), $clusterTime: { clusterTime: Timestamp(1547393834, 388), signature: { 
hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 9), $clusterTime: { clusterTime: Timestamp(1547393834, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D SHARDING [conn42] Command end db: config msg id: 45 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D SHARDING [conn42] Command begin db: config msg id: 47 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b2aa1824195fadc1051 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 121 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393234375) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.375+0000 D ASIO 
[conn42] startCommand: RemoteCommand 121 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393234375) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.376+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.376+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.376+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.376+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.427+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.427+0000 D ASIO [ShardRegistry] Request 121 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 210), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 210), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 210), $clusterTime: { clusterTime: Timestamp(1547393834, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.427+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: 
"config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 210), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 210), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 210), $clusterTime: { clusterTime: Timestamp(1547393834, 388), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.427+0000 D SHARDING [conn42] Command end db: config msg id: 47 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.427+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393234375) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 51ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.428+0000 D SHARDING [conn42] Command begin db: config msg id: 49 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.428+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 122 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.428+0000 D ASIO [conn42] startCommand: RemoteCommand 122 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:14 ivy 
mongos[27723]: 2019-01-13T15:37:14.428+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.428+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.428+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.428+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.464+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.464+0000 D ASIO [conn42] Request 122 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: 
"sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 210), $clusterTime: { clusterTime: Timestamp(1547393834, 421), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.464+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: 
"sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393834, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 210), $clusterTime: { clusterTime: Timestamp(1547393834, 421), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D SHARDING [conn42] Command end db: config msg id: 49 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D SHARDING [conn42] Command begin db: config msg id: 51 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 123 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D ASIO [conn42] startCommand: RemoteCommand 123 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D NETWORK 
[TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.465+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.501+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D ASIO [conn42] Request 123 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393834, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 524), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393834, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 524), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D SHARDING [conn42] Command end db: config msg id: 51 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D SHARDING [conn42] Command begin db: config msg 
id: 53 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b2aa1824195fadc1055 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 124 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D ASIO [conn42] startCommand: RemoteCommand 124 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.502+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.589+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.589+0000 D ASIO [ShardRegistry] Request 124 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" 
}, ok: 1.0, operationTime: Timestamp(1547393834, 561), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 332), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 332), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 580), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.589+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393834, 561), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 332), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 332), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 580), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.589+0000 D SHARDING [conn42] Command end db: config msg id: 53 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.589+0000 I COMMAND [conn42] query config.chunks command: 
{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 87ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.589+0000 D SHARDING [conn42] Command begin db: config msg id: 55 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b2aa1824195fadc1057 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 125 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D ASIO [conn42] startCommand: RemoteCommand 125 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.590+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.626+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D ASIO [ShardRegistry] Request 125 finished with response: { 
cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393834, 561), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 332), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 332), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 594), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393834, 561), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393834, 332), t: 1 }, lastOpVisible: { ts: Timestamp(1547393834, 332), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393832, 451), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 594), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D SHARDING [conn42] Command end db: config msg id: 55 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms 
Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D SHARDING [conn42] Command begin db: config msg id: 57 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 126 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D ASIO [conn42] startCommand: RemoteCommand 126 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.627+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.664+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.664+0000 D ASIO [conn42] Request 126 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393834, 663), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 663), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.664+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, 
operationTime: Timestamp(1547393834, 663), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 332), $clusterTime: { clusterTime: Timestamp(1547393834, 663), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.664+0000 D SHARDING [conn42] Command end db: config msg id: 57 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.664+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D SHARDING [conn42] Command begin db: config msg id: 59 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 127 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393234665) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D ASIO [conn42] startCommand: RemoteCommand 127 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393234665) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D NETWORK [TaskExecutorPool-0] 
Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.665+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.702+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.702+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 127 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393825871), up: 3487022, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393824249), up: 3433160, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393834209), up: 3486931, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393828031), up: 767, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393824883), up: 74771, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831967), up: 74803, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831102), up: 74776, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393828511), up: 74746, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ 
"umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393824820), up: 74743, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825493), up: 74715, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-eas Jan 13 15:37:14 ivy mongos[27723]: t1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393833127), up: 74695, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393827828), up: 74717, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831905), up: 74694, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393826243), up: 74662, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393829433), up: 74665, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393827604), up: 74609, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831437), up: 74642, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825288), up: 74636, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393826447), up: 74608, waiting: true }, { _id: "jacob:27 .......... 
5216, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393829811), up: 75180, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393828340), up: 75216, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393829586), up: 75975, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825862), up: 76031, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825373), up: 76032, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393824233), up: 75970, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393827315), up: 76562, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393826918), up: 76562, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833957), up: 76509, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833920), up: 76359, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833954), up: 76509, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833920), up: 76297, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393827581), up: 76353, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833921), up: 76297, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833920), up: 76171, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393825154), up: 76225, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393825155), up: 76226, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393832452), up: 122, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833919), up: 76110, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393832251), up: 76170, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393834, 663), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 525), $clusterTime: { clusterTime: Timestamp(1547393834, 798), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.702+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393825871), up: 3487022, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393824249), up: 3433160, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393834209), up: 3486931, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393828031), up: 767, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393824883), up: 74771, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831967), up: 74803, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831102), up: 74776, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393828511), up: 74746, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393824820), up: 74743, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825493), up: 74715, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393833127), up: 74695, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393827828), up: 74717, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831905), up: 74694, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393826243), up: 74662, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393829433), up: 74665, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393827604), up: 74609, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393831437), up: 74642, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825288), up: 74636, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393826447), up: 74608, waiting: true }, { _ .......... 
5216, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393829811), up: 75180, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393828340), up: 75216, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393829586), up: 75975, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825862), up: 76031, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393825373), up: 76032, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393824233), up: 75970, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393827315), up: 76562, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393826918), up: 76562, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833957), up: 76509, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833920), up: 76359, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833954), up: 76509, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833920), up: 76297, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393827581), up: 76353, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833921), up: 76297, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833920), up: 76171, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393825154), up: 76225, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393825155), up: 76226, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393832452), up: 122, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393833919), up: 76110, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393832251), up: 76170, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393834, 663), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 525), $clusterTime: { clusterTime: Timestamp(1547393834, 798), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.703+0000 D SHARDING [conn42] Command end db: config msg id: 59 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.703+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393234665) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 37ms Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D SHARDING [conn42] Command begin db: config msg id: 61 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 128 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D ASIO [conn42] startCommand: RemoteCommand 128 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.705+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.741+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.741+0000 D ASIO [conn42] Request 128 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393834, 800), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393834, 525), $clusterTime: { clusterTime: Timestamp(1547393834, 825), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.741+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393834, 800), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393834, 525), $clusterTime: { clusterTime: Timestamp(1547393834, 825), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.741+0000 D SHARDING [conn42] Command end db: config msg id: 61 Jan 13 15:37:14 ivy mongos[27723]: 2019-01-13T15:37:14.741+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 36ms Jan 13 15:37:19 ivy mongos[27723]: 2019-01-13T15:37:19.420+0000 D NETWORK [TaskExecutorPool-0] Compressing message with snappy Jan 13 15:37:19 ivy mongos[27723]: 2019-01-13T15:37:19.458+0000 D NETWORK [TaskExecutorPool-0] Decompressing message with snappy Jan 13 15:37:19 ivy mongos[27723]: 2019-01-13T15:37:19.458+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b32a1824195fadc105c Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 130 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:52.760+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393842760), up: 132, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 130 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:52.760+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393842760), up: 132, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:22 ivy mongos[27723]: 2019-01-13T15:37:22.760+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:22.999+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:22.999+0000 D ASIO [ShardRegistry] Request 130 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393842, 
676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393842, 911), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:22.999+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393842, 911), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:22.999+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:22.999+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 131 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:37:52.999+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393842, 676), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:22.999+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 131 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:37:52.999+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393842, 676), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.000+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.000+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.000+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.000+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.039+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D ASIO [ShardRegistry] Request 131 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393842, 911), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393842, 911), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 132 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:53.040+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393842, 676), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 132 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:53.040+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393842, 676), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 
13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.040+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D ASIO [ShardRegistry] Request 132 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393842, 934), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393842, 934), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 133 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:53.078+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393842, 676), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 133 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:37:53.078+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393842, 676), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.078+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.115+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.115+0000 D ASIO [ShardRegistry] Request 133 finished 
with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393843, 40), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.115+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393842, 676), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393842, 676), t: 1 }, lastOpVisible: { ts: Timestamp(1547393842, 676), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393842, 676), $clusterTime: { clusterTime: Timestamp(1547393843, 40), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:23 ivy mongos[27723]: 2019-01-13T15:37:23.115+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:28 ivy mongos[27723]: 2019-01-13T15:37:28.713+0000 D SHARDING [conn42] Command begin db: admin msg id: 63 Jan 13 15:37:28 ivy mongos[27723]: 2019-01-13T15:37:28.713+0000 D SHARDING [conn42] Command end db: admin msg id: 63 Jan 13 15:37:28 ivy mongos[27723]: 2019-01-13T15:37:28.713+0000 I COMMAND [conn42] command admin.$cmd command: 
ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 65
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.223+0000 D SHARDING [conn42] Command end db: admin msg id: 65
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.224+0000 D SHARDING [conn42] Command begin db: admin msg id: 67
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.224+0000 D NETWORK [conn42] Starting server-side compression negotiation
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.224+0000 D NETWORK [conn42] Compression negotiation not requested by client
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.225+0000 D SHARDING [conn42] Command end db: admin msg id: 67
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.225+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.227+0000 D SHARDING [conn42] Command begin db: admin msg id: 69
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.227+0000 D SHARDING [conn42] Command end db: admin msg id: 69
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.227+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D SHARDING [conn42] Command begin db: config msg id: 71
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 134 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D ASIO [conn42] startCommand: RemoteCommand 134 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.229+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.266+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.266+0000 D ASIO [conn42] Request 134 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393848, 896), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 135), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.267+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393848, 896), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 135), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.267+0000 D SHARDING [conn42] Command end db: config msg id: 71
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.267+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 38ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D SHARDING [conn42] Command begin db: config msg id: 73
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b39a1824195fadc1066
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 135 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D ASIO [conn42] startCommand: RemoteCommand 135 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.268+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.334+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D ASIO [ShardRegistry] Request 135 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393849, 201), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393848, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393848, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 201), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393849, 201), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393848, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393848, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 201), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D SHARDING [conn42] Command end db: config msg id: 73
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 66ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D SHARDING [conn42] Command begin db: config msg id: 75
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 136 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D ASIO [conn42] startCommand: RemoteCommand 136 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.335+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.372+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.372+0000 D ASIO [conn42] Request 136 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393849, 201), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 296), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.372+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393849, 201), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 296), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.372+0000 D SHARDING [conn42] Command end db: config msg id: 75
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.372+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 36ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.372+0000 D SHARDING [conn42] Command begin db: config msg id: 77
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b39a1824195fadc1069
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 137 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393249372) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D ASIO [conn42] startCommand: RemoteCommand 137 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393249372) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.373+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D ASIO [ShardRegistry] Request 137 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393849, 316), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393848, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393848, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 316), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393849, 316), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393848, 896), t: 1 }, lastOpVisible: { ts: Timestamp(1547393848, 896), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393848, 896), $clusterTime: { clusterTime: Timestamp(1547393849, 316), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D SHARDING [conn42] Command end db: config msg id: 77
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393249372) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 60ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D SHARDING [conn42] Command begin db: config msg id: 79
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 138 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.433+0000 D ASIO [conn42] startCommand: RemoteCommand 138 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.434+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.434+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.434+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.434+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.470+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.470+0000 D ASIO [conn42] Request 138 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393849, 316), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 201), $clusterTime: { clusterTime: Timestamp(1547393849, 380), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.470+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393849, 316), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 201), $clusterTime: { clusterTime: Timestamp(1547393849, 380), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.470+0000 D SHARDING [conn42] Command end db: config msg id: 79
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.470+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 36ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D SHARDING [conn42] Command begin db: config msg id: 81
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 139 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D ASIO [conn42] startCommand: RemoteCommand 139 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.471+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.507+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.507+0000 D ASIO [conn42] Request 139 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393849, 316), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 201), $clusterTime: { clusterTime: Timestamp(1547393849, 394), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.507+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393849, 316), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 201), $clusterTime: { clusterTime: Timestamp(1547393849, 394), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D SHARDING [conn42] Command end db: config msg id: 81
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D SHARDING [conn42] Command begin db: config msg id: 83
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b39a1824195fadc106d
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 140 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D ASIO [conn42] startCommand: RemoteCommand 140 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.508+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.592+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.592+0000 D ASIO [ShardRegistry] Request 140 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393849, 395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393849, 316), t: 1 }, lastOpVisible: { ts: Timestamp(1547393849, 316), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 316), $clusterTime: { clusterTime: Timestamp(1547393849, 409), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.592+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393849, 395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393849, 316), t: 1 }, lastOpVisible: { ts: Timestamp(1547393849, 316), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 316), $clusterTime: { clusterTime: Timestamp(1547393849, 409), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.592+0000 D SHARDING [conn42] Command end db: config msg id: 83
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.592+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 84ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D SHARDING [conn42] Command begin db: config msg id: 85
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b39a1824195fadc106f
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 141 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D ASIO [conn42] startCommand: RemoteCommand 141 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.593+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.629+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.629+0000 D ASIO [ShardRegistry] Request 141 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393849, 395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393849, 316), t: 1 }, lastOpVisible: { ts: Timestamp(1547393849, 316), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 316), $clusterTime: { clusterTime: Timestamp(1547393849, 518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.629+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393849, 395), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393849, 316), t: 1 }, lastOpVisible: { ts: Timestamp(1547393849, 316), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 316), $clusterTime: { clusterTime: Timestamp(1547393849, 518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D SHARDING [conn42] Command end db: config msg id: 85
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 36ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D SHARDING [conn42] Command begin db: config msg id: 87
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 142 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D ASIO [conn42] startCommand: RemoteCommand 142 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.630+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.667+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.667+0000 D ASIO [conn42] Request 142 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393849, 395), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 395), $clusterTime: { clusterTime: Timestamp(1547393849, 518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.667+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393849, 395), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 395), $clusterTime: { clusterTime: Timestamp(1547393849, 518), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.667+0000 D SHARDING [conn42] Command end db: config msg id: 87
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.667+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D SHARDING [conn42] Command begin db: config msg id: 89
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 143 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393249669) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D ASIO [conn42] startCommand: RemoteCommand 143 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393249669) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" }
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.669+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.707+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.707+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 143 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393847089), up: 3487044, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393845448), up: 3433181, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393844520), up: 3486942, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848482), up: 787, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845302), up: 74791, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393842249), up: 74814, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393841276), up: 74787, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848848), up: 74766, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845235), up: 74763, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845908), up: 74735, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393843379), up: 74705, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848412), up: 74738, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393842179), up: 74704, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393846548), up: 74682, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393839610), up: 74676, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848069), up: 74629, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393841613), up: 74652, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845844), up: 74656, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393846754), up: 74628, waiting: true }, { _id: "jacob:27 ..........
5227, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393840063), up: 75191, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393848882), up: 75236, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393840141), up: 75986, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:29 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393846740), up: 76052, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393846119), up: 76052, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844941), up: 75991, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393847944), up: 76583, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393847557), up: 76583, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844257), up: 76519, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844221), up: 76369, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844255), up: 76519, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844225), up: 76307, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393848306), up: 76374, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844225), up: 76307, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844227), up: 76182, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393845688), up: 76246, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393845686), Jan 13 15:37:29 ivy mongos[27723]: up: 76247, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393842760), up: 132, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844127), up: 76120, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393842581), up: 76180, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393849, 395), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 395), $clusterTime: { clusterTime: Timestamp(1547393849, 664), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.707+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393847089), up: 3487044, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393845448), up: 3433181, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393844520), up: 3486942, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848482), up: 787, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845302), up: 74791, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393842249), up: 74814, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393841276), up: 74787, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848848), up: 74766, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845235), up: 74763, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845908), up: 74735, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:37:29 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393843379), up: 74705, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848412), up: 74738, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393842179), up: 74704, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393846548), up: 74682, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393839610), up: 74676, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393848069), up: 74629, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393841613), up: 74652, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393845844), up: 74656, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393846754), up: 74628, waiting: true }, { _ .......... 
5227, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393840063), up: 75191, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393848882), up: 75236, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393840141), up: 75986, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:29 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393846740), up: 76052, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393846119), up: 76052, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844941), up: 75991, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393847944), up: 76583, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393847557), up: 76583, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844257), up: 76519, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844221), up: 76369, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844255), up: 76519, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844225), up: 76307, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393848306), up: 76374, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844225), up: 76307, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844227), up: 76182, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393845688), up: 76246, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393845686), Jan 13 15:37:29 ivy mongos[27723]: up: 76247, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393842760), up: 132, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393844127), up: 76120, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393842581), up: 76180, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393849, 395), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 395), $clusterTime: { clusterTime: Timestamp(1547393849, 664), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.709+0000 D SHARDING [conn42] Command end db: config msg id: 89 Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.709+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393249669) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 40ms Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D SHARDING [conn42] Command begin db: config msg id: 91 Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 144 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D ASIO [conn42] startCommand: RemoteCommand 144 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.710+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.747+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.747+0000 D ASIO [conn42] Request 144 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393849, 666), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393849, 395), $clusterTime: { clusterTime: Timestamp(1547393849, 666), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.747+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393849, 666), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393849, 395), $clusterTime: { clusterTime: Timestamp(1547393849, 666), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.747+0000 D SHARDING [conn42] Command end db: config msg id: 91 Jan 13 15:37:29 ivy mongos[27723]: 2019-01-13T15:37:29.747+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.519+0000 D TRACKING [UserCacheInvalidator] Cmd: NotSet, TrackingId: 5c3b5b3ba1824195fadc1074 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.519+0000 D EXECUTOR [UserCacheInvalidator] Scheduling remote command request: RemoteCommand 145 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:38:01.519+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.519+0000 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 145 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:38:01.519+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:37:31 ivy 
mongos[27723]: 2019-01-13T15:37:31.519+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.538+0000 D SHARDING [conn42] Command begin db: admin msg id: 93 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.539+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.539+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.539+0000 D SHARDING [conn42] Command end db: admin msg id: 93 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.539+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.555+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.555+0000 D ASIO [ShardRegistry] Request 145 finished with response: { cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393851, 552), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393851, 68), t: 1 }, lastOpVisible: { ts: Timestamp(1547393851, 68), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: 
ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393851, 68), $clusterTime: { clusterTime: Timestamp(1547393851, 552), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.555+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393851, 552), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393851, 68), t: 1 }, lastOpVisible: { ts: Timestamp(1547393851, 68), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393842, 676), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393851, 68), $clusterTime: { clusterTime: Timestamp(1547393851, 552), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.555+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D SHARDING [shard registry reload] Reloading shardRegistry Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D TRACKING [shard registry reload] Cmd: NotSet, TrackingId: 5c3b5b3ba1824195fadc1077 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D EXECUTOR [shard registry reload] Scheduling remote command request: RemoteCommand 146 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:01.565+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393851, 68), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D ASIO [shard registry reload] startCommand: RemoteCommand 146 -- 
target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:01.565+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393851, 68), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.565+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.605+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.605+0000 D ASIO [ShardRegistry] Request 146 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", 
host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393851, 552), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393851, 68), t: 1 }, lastOpVisible: { ts: Timestamp(1547393851, 68), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, las Jan 13 15:37:31 ivy mongos[27723]: tCommittedOpTime: Timestamp(1547393851, 68), $clusterTime: { clusterTime: Timestamp(1547393851, 552), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.605+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: 
"sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393851, 552), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393851, 68), t: 1 }, lastOpVisible: { ts: Timestamp(1547393851, 68), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('00000000000000000000 Jan 13 15:37:31 ivy mongos[27723]: 0000') }, lastCommittedOpTime: Timestamp(1547393851, 68), $clusterTime: { clusterTime: Timestamp(1547393851, 552), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.605+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.605+0000 D SHARDING [shard registry reload] found 7 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1547393851, 68), t: 
1 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1, with CS sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_central1, with CS sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard 
sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.606+0000 D SHARDING [shard registry reload] Adding shard config, with CS 
sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.714+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b5b3ba1824195fadc1079 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.714+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 147 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:01.714+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393851714) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.714+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 147 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:01.714+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393851714) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.715+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.715+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.715+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.715+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.898+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.898+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.934+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.934+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547393851, 900), t: 1 }, lastWriteDate: new Date(1547393851000), majorityOpTime: { ts: Timestamp(1547393851, 706), t: 1 }, majorityWriteDate: new Date(1547393851000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393851914), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393851, 900), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393851, 706), $clusterTime: { clusterTime: Timestamp(1547393851, 969), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.934+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 lastWriteDate 
to 2019-01-13T15:37:31.000+0000 Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.934+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393851, 900), t: 1 } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.934+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.942+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.943+0000 D ASIO [ShardRegistry] Request 147 finished with response: { lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393821515) }, ok: 1.0, operationTime: Timestamp(1547393851, 707), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393851, 707), t: 1 }, lastOpVisible: { ts: Timestamp(1547393851, 707), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393851, 707), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393851, 707), $clusterTime: { clusterTime: Timestamp(1547393851, 969), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.943+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393821515) }, ok: 1.0, operationTime: Timestamp(1547393851, 707), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393851, 707), t: 1 }, lastOpVisible: { ts: Timestamp(1547393851, 707), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: 
Timestamp(1547393851, 707), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393851, 707), $clusterTime: { clusterTime: Timestamp(1547393851, 969), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:31 ivy mongos[27723]: 2019-01-13T15:37:31.943+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.040+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.040+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393851, 900), t: 1 }, lastWriteDate: new Date(1547393851000), majorityOpTime: { ts: Timestamp(1547393851, 706), t: 1 }, majorityWriteDate: new Date(1547393851000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393851982), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393851, 900), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393851, 706), $clusterTime: { clusterTime: Timestamp(1547393851, 1080), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 
0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.040+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:37:31.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.040+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393851, 900), t: 1 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.040+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.081+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.081+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393851, 900), t: 1 }, lastWriteDate: new Date(1547393851000), majorityOpTime: { ts: Timestamp(1547393851, 900), t: 1 }, majorityWriteDate: new Date(1547393851000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852056), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393851, 900), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393851, 900), $clusterTime: { 
clusterTime: Timestamp(1547393851, 1080), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.081+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:37:31.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.081+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393851, 900), t: 1 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.081+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.187+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393851, 900), t: 1 }, lastWriteDate: new Date(1547393851000), majorityOpTime: { ts: Timestamp(1547393851, 900), t: 1 }, majorityWriteDate: new Date(1547393851000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852130), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393851, 900), $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393851, 900), $clusterTime: { clusterTime: Timestamp(1547393851, 1080), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:37:31.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393851, 900), t: 1 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.188+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.225+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.226+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393852, 74), t: 1 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393851, 900), t: 1 }, majorityWriteDate: new Date(1547393851000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852202), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, 
compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 74), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393851, 900), $clusterTime: { clusterTime: Timestamp(1547393852, 74), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.226+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.226+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393852, 74), t: 1 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.226+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.264+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393852, 74), t: 1 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393851, 900), t: 1 }, majorityWriteDate: new Date(1547393851000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new 
Date(1547393852242), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 74), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393851, 900), $clusterTime: { clusterTime: Timestamp(1547393852, 74), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393852, 74), t: 1 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 366 msec Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: 
"phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { ts: Timestamp(1547393852, 219), t: 22 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 133), t: 22 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852279), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 219), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393852, 133), $configServerState: { opTime: { ts: Timestamp(1547393851, 900), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 219), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393852, 219), t: 22 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], 
arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 231), t: 22 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 171), t: 22 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852316), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 231), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 171), $configServerState: { opTime: { ts: Timestamp(1547393848, 754), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 253), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393852, 231), t: 22 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on 
ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 212), t: 22 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 171), t: 22 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852337), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 212), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 171), $configServerState: { opTime: { ts: Timestamp(1547393836, 464), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 277), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393852, 212), t: 22 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 77 msec Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh 
of replica set sessions_gce_us_central1 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.342+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.343+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.343+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393852, 311), t: 4 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 138), t: 4 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852339), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 311), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393852, 138), $configServerState: { opTime: { ts: Timestamp(1547393852, 9), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 311), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.343+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
camden.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.343+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393852, 311), t: 4 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.344+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.383+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.383+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 257), t: 4 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 138), t: 4 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852358), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 257), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 138), $configServerState: { opTime: { ts: Timestamp(1547393829, 251), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 258), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.383+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000 Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.383+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393852, 257), t: 4 } Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.383+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 323), t: 4 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 194), t: 4 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852379), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 323), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 194), 
$configServerState: { opTime: { ts: Timestamp(1547393850, 204), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 324), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393852, 323), t: 4 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 42 msec
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_west1
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.385+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.424+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.425+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: "tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547393852, 288), t: 28 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 182), t: 28 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852400), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 288), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547393852, 182), $configServerState: { opTime: { ts: Timestamp(1547393852, 9), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 288), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.425+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.425+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393852, 288), t: 28 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.425+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.464+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.464+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 348), t: 28 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 207), t: 28 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852440), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 348), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 207), $configServerState: { opTime: { ts: Timestamp(1547393840, 551), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 369), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.464+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.464+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393852, 348), t: 28 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.464+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 338), t: 28 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 207), t: 28 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852461), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 338), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 207), $configServerState: { opTime: { ts: Timestamp(1547393842, 104), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 339), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393852, 338), t: 28 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 took 81 msec
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.466+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.567+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.567+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547393852, 592), t: 9 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 519), t: 9 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852512), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 592), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393852, 519), $configServerState: { opTime: { ts: Timestamp(1547393852, 74), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 592), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.567+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.567+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393852, 592), t: 9 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.567+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 651), t: 9 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 640), t: 9 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852611), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 651), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547393852, 640), $configServerState: { opTime: { ts: Timestamp(1547393847, 1008), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 652), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393852, 651), t: 9 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 196 msec
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.663+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.758+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.758+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393852, 668), t: 4 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 668), t: 4 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852706), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 668), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393852, 668), $configServerState: { opTime: { ts: Timestamp(1547393852, 310), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 668), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.758+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.758+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393852, 668), t: 4 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.758+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 839), t: 4 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 668), t: 4 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852807), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 839), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 668), $configServerState: { opTime: { ts: Timestamp(1547393852, 185), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 841), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393852, 839), t: 4 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west2 took 202 msec
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.865+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.971+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.971+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393852, 970), t: 10 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 923), t: 10 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393852913), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 970), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393852, 923), $configServerState: { opTime: { ts: Timestamp(1547393852, 659), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 971), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.971+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.971+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393852, 970), t: 10 }
Jan 13 15:37:32 ivy mongos[27723]: 2019-01-13T15:37:32.972+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393852, 1006), t: 10 }, lastWriteDate: new Date(1547393852000), majorityOpTime: { ts: Timestamp(1547393852, 996), t: 10 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393853017), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393852, 1006), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393852, 996), $configServerState: { opTime: { ts: Timestamp(1547393828, 93), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393852, 1006), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:37:32.000+0000
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393852, 1006), t: 10 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 206 msec
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.072+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393853, 62), t: 3 }, lastWriteDate: new Date(1547393853000), majorityOpTime: { ts: Timestamp(1547393853, 4), t: 3 }, majorityWriteDate: new Date(1547393853000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393853089), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393853, 62), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393853, 4), $configServerState: { opTime: { ts: Timestamp(1547393852, 676), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393853, 62), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:33.000+0000
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393853, 62), t: 3 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.110+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.112+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.112+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393853, 33), t: 3 }, lastWriteDate: new Date(1547393853000), majorityOpTime: { ts: Timestamp(1547393852, 1007), t: 3 }, majorityWriteDate: new Date(1547393852000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393853106), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393853, 33), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393852, 1007), $configServerState: { opTime: { ts: Timestamp(1547393840, 368), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393853, 36), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.112+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:37:33.000+0000
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.112+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393853, 33), t: 3 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.112+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.115+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b3da1824195fadc107b
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.115+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 148 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:03.115+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393853115), up: 142, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.116+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 148 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:03.115+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393853115), up: 142, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.116+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.116+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.116+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.116+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.150+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.150+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393853, 102), t: 3 }, lastWriteDate: new Date(1547393853000), majorityOpTime: { ts: Timestamp(1547393853, 4), t: 3 }, majorityWriteDate: new Date(1547393853000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393853127), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393853, 102), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393853, 4), $configServerState: { opTime: { ts: Timestamp(1547393850, 774), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393853, 170), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.150+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:37:33.000+0000
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.150+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393853, 102), t: 3 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.150+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 78 msec
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D ASIO [ShardRegistry] Request 148 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393853, 140), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393853, 140), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393853, 140), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393853, 140), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 149 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:03.312+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393853, 140), t: 1 } }, limit: 1, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 149 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:03.312+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393853, 140), t: 1 } }, limit: 1, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.312+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D ASIO [ShardRegistry] Request 149 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 150 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:03.349+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393853, 140), t: 1 } }, limit: 1, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 150 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:03.349+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393853, 140), t: 1 } }, limit: 1, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.349+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D ASIO [ShardRegistry] Request 150 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 151 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:03.386+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393853, 140), t: 1 } }, limit: 1, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 151 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:03.386+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393853, 140), t: 1 } }, limit: 1, maxTimeMS: 30000 }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.386+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.425+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.425+0000 D ASIO [ShardRegistry] Request 151 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.425+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393853, 140), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393853, 140), t: 1 }, lastOpVisible: { ts: Timestamp(1547393853, 140), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime:
Timestamp(1547393853, 140), $clusterTime: { clusterTime: Timestamp(1547393853, 272), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:33 ivy mongos[27723]: 2019-01-13T15:37:33.425+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.426+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b47a1824195fadc1080 Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.426+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 152 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:13.426+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393863425), up: 153, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.426+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 152 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:13.426+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393863425), up: 153, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.426+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.427+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was 
canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.427+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.427+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.647+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.647+0000 D ASIO [ShardRegistry] Request 152 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393863, 443), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.647+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393863, 443), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 153 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:13.647+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393863, 444), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 153 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:13.647+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393863, 444), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.648+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.687+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D ASIO [ShardRegistry] Request 153 finished with response: { cursor: { firstBatch: [ { _id: "balancer", 
mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393863, 444), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393863, 444), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 154 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:13.688+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393863, 444), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 
2019-01-13T15:37:43.688+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 154 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:13.688+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393863, 444), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.688+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.714+0000 D SHARDING [conn42] Command begin db: admin msg id: 95 Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.714+0000 D SHARDING [conn42] Command end db: admin msg id: 95 Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.714+0000 I COMMAND [conn42] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D ASIO [ShardRegistry] Request 154 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393863, 444), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393863, 444), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 770), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 155 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:13.727+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393863, 444), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 155 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:13.727+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, 
readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393863, 444), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.727+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.765+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.765+0000 D ASIO [ShardRegistry] Request 155 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393863, 444), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 795), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.765+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393863, 444), 
$replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 444), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 444), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393863, 444), $clusterTime: { clusterTime: Timestamp(1547393863, 795), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:43 ivy mongos[27723]: 2019-01-13T15:37:43.765+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.222+0000 D SHARDING [conn42] Command begin db: admin msg id: 97 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.222+0000 D SHARDING [conn42] Command end db: admin msg id: 97 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.222+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 99 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.223+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.223+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.223+0000 D SHARDING [conn42] Command end db: admin msg id: 99 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.223+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.225+0000 D SHARDING [conn42] Command begin db: admin msg id: 101 
Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.225+0000 D SHARDING [conn42] Command end db: admin msg id: 101 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.225+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D SHARDING [conn42] Command begin db: config msg id: 103 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 156 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D ASIO [conn42] startCommand: RemoteCommand 156 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.227+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D ASIO [conn42] Request 156 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393864, 170), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 990), 
$clusterTime: { clusterTime: Timestamp(1547393864, 174), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393864, 170), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 990), $clusterTime: { clusterTime: Timestamp(1547393864, 174), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D SHARDING [conn42] Command end db: config msg id: 103 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D SHARDING [conn42] Command begin db: config msg id: 105 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b48a1824195fadc108a Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 157 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D ASIO [conn42] startCommand: RemoteCommand 157 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.264+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 
2019-01-13T15:37:44.265+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.265+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.265+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.330+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.330+0000 D ASIO [ShardRegistry] Request 157 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 990), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 990), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 990), $clusterTime: { clusterTime: Timestamp(1547393864, 325), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.330+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { 
_id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393863, 990), t: 1 }, lastOpVisible: { ts: Timestamp(1547393863, 990), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 990), $clusterTime: { clusterTime: Timestamp(1547393864, 325), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.330+0000 D SHARDING [conn42] Command end db: config msg id: 105 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.330+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 66ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D SHARDING [conn42] Command begin db: config msg id: 107 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 158 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D ASIO [conn42] startCommand: RemoteCommand 158 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { 
_id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.331+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.367+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.367+0000 D ASIO [conn42] Request 158 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 990), $clusterTime: { clusterTime: Timestamp(1547393864, 399), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.367+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393863, 990), $clusterTime: { clusterTime: Timestamp(1547393864, 
399), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D SHARDING [conn42] Command end db: config msg id: 107 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 36ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D SHARDING [conn42] Command begin db: config msg id: 109 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b48a1824195fadc108d Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 159 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393264368) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D ASIO [conn42] startCommand: RemoteCommand 159 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393264368) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: 
Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.368+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.430+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.430+0000 D ASIO [ShardRegistry] Request 159 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393864, 170), t: 1 }, lastOpVisible: { ts: Timestamp(1547393864, 170), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 170), $clusterTime: { clusterTime: Timestamp(1547393864, 407), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.430+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393864, 170), t: 1 }, lastOpVisible: { ts: Timestamp(1547393864, 170), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 170), $clusterTime: { clusterTime: Timestamp(1547393864, 407), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D SHARDING [conn42] Command end db: config msg id: 109 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393264368) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 62ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D SHARDING [conn42] Command begin db: config msg id: 111 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 160 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D ASIO [conn42] startCommand: RemoteCommand 160 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.431+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 
15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D ASIO [conn42] Request 160 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { 
clusterTime: Timestamp(1547393864, 461), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 461), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D SHARDING [conn42] Command end db: config msg id: 111 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D SHARDING [conn42] Command begin db: config msg id: 113 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 161 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D ASIO [conn42] startCommand: RemoteCommand 161 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.468+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D NETWORK [conn42] Decompressing message 
with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D ASIO [conn42] Request 161 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 574), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 574), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D SHARDING [conn42] Command end db: config msg id: 113 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D SHARDING [conn42] Command begin db: config msg id: 115 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b48a1824195fadc1091 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 162 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D ASIO [conn42] startCommand: RemoteCommand 162 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.505+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D ASIO [ShardRegistry] Request 162 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393864, 325), t: 1 }, lastOpVisible: { ts: Timestamp(1547393864, 325), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 574), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393864, 325), t: 1 }, lastOpVisible: { ts: Timestamp(1547393864, 325), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 574), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D SHARDING [conn42] Command end db: config msg id: 115 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 82ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D SHARDING [conn42] Command begin db: config msg id: 117 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b48a1824195fadc1093 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D EXECUTOR [conn42] Scheduling remote command 
request: RemoteCommand 163 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D ASIO [conn42] startCommand: RemoteCommand 163 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.588+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.625+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.625+0000 D ASIO [ShardRegistry] Request 163 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393864, 325), t: 1 }, lastOpVisible: { ts: Timestamp(1547393864, 325), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 
325), $clusterTime: { clusterTime: Timestamp(1547393864, 761), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.625+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393864, 325), t: 1 }, lastOpVisible: { ts: Timestamp(1547393864, 325), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393863, 443), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 761), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.625+0000 D SHARDING [conn42] Command end db: config msg id: 117 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.625+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 36ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.625+0000 D SHARDING [conn42] Command begin db: config msg id: 119 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.626+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 164 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.626+0000 D ASIO [conn42] startCommand: RemoteCommand 164 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.626+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.626+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.626+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.626+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.662+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.662+0000 D ASIO [conn42] Request 164 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 761), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.662+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 761), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.663+0000 D SHARDING [conn42] Command end db: config msg id: 119 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.663+0000 I COMMAND [conn42] 
query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.664+0000 D SHARDING [conn42] Command begin db: config msg id: 121 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.664+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 165 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393264664) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.664+0000 D ASIO [conn42] startCommand: RemoteCommand 165 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393264664) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.664+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.664+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.664+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.665+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.701+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.701+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Request 165 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393857442), up: 3487054, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393855787), up: 3433192, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393854841), up: 3486952, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393858639), up: 797, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393855511), up: 74801, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393862674), up: 74834, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393861787), up: 74807, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393859092), up: 74777, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393855445), up: 74773, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856128), up: 74745, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-eas Jan 13 15:37:44 ivy mongos[27723]: t1.admiral" ], mongoVersion: "4.0.5", ping: new 
Date(1547393863774), up: 74726, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393858567), up: 74748, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393862535), up: 74724, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856781), up: 74693, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393860104), up: 74696, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393858202), up: 74640, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393861991), up: 74672, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856065), up: 74667, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393856928), up: 74638, waiting: true }, { _id: "jacob:27 .......... 
5247, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393860490), up: 75211, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393859127), up: 75246, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393860827), up: 76006, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:44 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393857143), up: 76062, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856476), up: 76063, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393855266), up: 76001, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393858274), up: 76593, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393857957), up: 76593, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854579), up: 76530, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854544), up: 76380, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854579), up: 76530, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854545), up: 76317, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393858741), up: 76384, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854546), up: 76318, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854545), up: 76192, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393856054), up: 76256, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393856056), Jan 13 15:37:44 ivy mongos[27723]: up: 76257, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393863425), up: 153, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854446), up: 76131, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393863365), up: 76201, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 803), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.701+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393857442), up: 3487054, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393855787), up: 3433192, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393854841), up: 3486952, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393858639), up: 797, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393855511), up: 74801, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393862674), up: 74834, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393861787), up: 74807, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393859092), up: 74777, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393855445), up: 74773, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856128), up: 74745, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:37:44 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393863774), up: 74726, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393858567), up: 74748, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393862535), up: 74724, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856781), up: 74693, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393860104), up: 74696, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393858202), up: 74640, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393861991), up: 74672, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856065), up: 74667, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393856928), up: 74638, waiting: true }, { _ .......... 
5247, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393860490), up: 75211, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393859127), up: 75246, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393860827), up: 76006, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393857143), up: 76062, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393856476), up: 76063, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393855266), up: 76001, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393858274), up: 76593, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393857957), up: 76593, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854579), up: 76530, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854544), up: 76380, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854579), up: 76530, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854545), up: 76317, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5",
ping: new Date(1547393858741), up: 76384, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854546), up: 76318, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854545), up: 76192, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393856054), up: 76256, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393856056), up: 76257, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393863425), up: 153, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393854446), up: 76131, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393863365), up: 76201, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 803), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.703+0000 D SHARDING [conn42] Command end db: config msg id: 121 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.703+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393264664) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0
nreturned:63 reslen:9894 38ms Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.704+0000 D SHARDING [conn42] Command begin db: config msg id: 123 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.705+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 166 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.705+0000 D ASIO [conn42] startCommand: RemoteCommand 166 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.705+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.705+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.705+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.705+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.741+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.741+0000 D ASIO [conn42] Request 166 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 812), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.741+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393864, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393864, 325), $clusterTime: { clusterTime: Timestamp(1547393864, 812), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.742+0000 D SHARDING [conn42] Command end db: config msg id: 123 Jan 13 15:37:44 ivy mongos[27723]: 2019-01-13T15:37:44.742+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.765+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b51a1824195fadc1098 Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.765+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 167 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:23.765+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393873765), up: 163, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", 
wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.765+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 167 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:23.765+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393873765), up: 163, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.765+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.766+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.766+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.766+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D ASIO [ShardRegistry] Request 167 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393873, 935), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 
935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393873, 935), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 168 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:23.961+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393873, 935), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 168 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:23.961+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: 
"majority", afterOpTime: { ts: Timestamp(1547393873, 935), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.961+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.962+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.962+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:53 ivy mongos[27723]: 2019-01-13T15:37:53.962+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.001+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D ASIO [ShardRegistry] Request 168 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393873, 935), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393873, 935), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 169 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:24.002+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393873, 935), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 169 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:24.002+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393873, 935), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.002+0000 D NETWORK [ShardRegistry] Timer 
received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.040+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.040+0000 D ASIO [ShardRegistry] Request 169 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393873, 935), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1410), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.040+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393873, 935), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1410), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.040+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 170 -- 
target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:24.040+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393873, 935), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.040+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 170 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:24.040+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393873, 935), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.040+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.041+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.041+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.041+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.041+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.079+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.079+0000 D ASIO [ShardRegistry] Request 170 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393873, 1364), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, 
$gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1410), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.079+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393873, 1364), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393873, 935), t: 1 }, lastOpVisible: { ts: Timestamp(1547393873, 935), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393873, 935), $clusterTime: { clusterTime: Timestamp(1547393873, 1410), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:54 ivy mongos[27723]: 2019-01-13T15:37:54.079+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:58 ivy mongos[27723]: 2019-01-13T15:37:58.715+0000 D SHARDING [conn42] Command begin db: admin msg id: 125 Jan 13 15:37:58 ivy mongos[27723]: 2019-01-13T15:37:58.715+0000 D SHARDING [conn42] Command end db: admin msg id: 125 Jan 13 15:37:58 ivy mongos[27723]: 2019-01-13T15:37:58.715+0000 I COMMAND [conn42] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 127 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.223+0000 D SHARDING [conn42] Command end db: admin msg id: 127 Jan 13 15:37:59 ivy mongos[27723]: 
2019-01-13T15:37:59.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.225+0000 D SHARDING [conn42] Command begin db: admin msg id: 129 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.226+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.226+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.226+0000 D SHARDING [conn42] Command end db: admin msg id: 129 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.226+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.229+0000 D SHARDING [conn42] Command begin db: admin msg id: 131 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.229+0000 D SHARDING [conn42] Command end db: admin msg id: 131 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.229+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.231+0000 D SHARDING [conn42] Command begin db: config msg id: 133 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.232+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 171 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.232+0000 D ASIO [conn42] startCommand: RemoteCommand 171 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, 
allowImplicitCollectionCreation: false } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.232+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.232+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.232+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.232+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.270+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.270+0000 D ASIO [conn42] Request 171 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393879, 214), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 1), $clusterTime: { clusterTime: Timestamp(1547393879, 240), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.270+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393879, 214), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 1), $clusterTime: { clusterTime: Timestamp(1547393879, 240), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.270+0000 D SHARDING [conn42] Command end db: config msg id: 133 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 39ms 
Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D SHARDING [conn42] Command begin db: config msg id: 135 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b57a1824195fadc10a2 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 172 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D ASIO [conn42] startCommand: RemoteCommand 172 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.271+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.339+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.339+0000 D ASIO [ShardRegistry] Request 172 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: 
"sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393879, 214), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 1), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 1), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 1), $clusterTime: { clusterTime: Timestamp(1547393879, 403), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.339+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393879, 214), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 1), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 1), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 1), $clusterTime: { clusterTime: Timestamp(1547393879, 403), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D SHARDING [conn42] Command end db: config msg id: 135 
Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 68ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D SHARDING [conn42] Command begin db: config msg id: 137 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 173 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D ASIO [conn42] startCommand: RemoteCommand 173 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.340+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 D 
ASIO [conn42] Request 173 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393879, 214), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 125), $clusterTime: { clusterTime: Timestamp(1547393879, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393879, 214), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 125), $clusterTime: { clusterTime: Timestamp(1547393879, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 D SHARDING [conn42] Command end db: config msg id: 137 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 D SHARDING [conn42] Command begin db: config msg id: 139 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.377+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b57a1824195fadc10a5 Jan 13 15:37:59 ivy mongos[27723]: 
2019-01-13T15:37:59.378+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 174 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393279377) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.378+0000 D ASIO [conn42] startCommand: RemoteCommand 174 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393279377) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.378+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.378+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.378+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.378+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.443+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.443+0000 D ASIO [ShardRegistry] Request 174 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393879, 451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 214), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 214), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: 
Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 214), $clusterTime: { clusterTime: Timestamp(1547393879, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.443+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393879, 451), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 214), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 214), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 214), $clusterTime: { clusterTime: Timestamp(1547393879, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D SHARDING [conn42] Command end db: config msg id: 139 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393279377) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 66ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D SHARDING [conn42] Command begin db: config msg id: 141 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 175 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: 
"/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D ASIO [conn42] startCommand: RemoteCommand 175 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.444+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.481+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.481+0000 D ASIO [conn42] Request 175 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ 
"gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393879, 451), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 214), $clusterTime: { clusterTime: Timestamp(1547393879, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.481+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: 
"sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393879, 451), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 214), $clusterTime: { clusterTime: Timestamp(1547393879, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.481+0000 D SHARDING [conn42] Command end db: config msg id: 141 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.481+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D SHARDING [conn42] Command begin db: config msg id: 143 Jan 13 
15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 176 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D ASIO [conn42] startCommand: RemoteCommand 176 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.482+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.519+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.519+0000 D ASIO [conn42] Request 176 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393879, 451), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 214), $clusterTime: { clusterTime: Timestamp(1547393879, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.519+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393879, 451), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393879, 214), $clusterTime: { clusterTime: Timestamp(1547393879, 527), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.519+0000 D SHARDING [conn42] Command end db: config msg id: 143 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.519+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 37ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D SHARDING [conn42] Command begin db: config msg id: 145 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b57a1824195fadc10a9 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 177 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D ASIO [conn42] startCommand: RemoteCommand 177 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.520+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 
2019-01-13T15:37:59.622+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.622+0000 D ASIO [ShardRegistry] Request 177 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393879, 606), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 451), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 451), $clusterTime: { clusterTime: Timestamp(1547393879, 730), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.622+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393879, 606), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 451), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 451), t: 1 }, configVersion: 6, replicaSetId: 
ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 451), $clusterTime: { clusterTime: Timestamp(1547393879, 730), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.622+0000 D SHARDING [conn42] Command end db: config msg id: 145 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.622+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 102ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D SHARDING [conn42] Command begin db: config msg id: 147 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b57a1824195fadc10ab Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 178 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { total: { $sum: 1 }, _id: "$partitioned" } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D ASIO [conn42] startCommand: RemoteCommand 178 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { total: { $sum: 1 }, _id: "$partitioned" } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 
2019-01-13T15:37:59.623+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.623+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.660+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.660+0000 D ASIO [ShardRegistry] Request 178 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393879, 763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 606), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 606), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 763), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.660+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393879, 763), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393879, 606), t: 1 }, lastOpVisible: { ts: Timestamp(1547393879, 606), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: 
Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 763), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.661+0000 D SHARDING [conn42] Command end db: config msg id: 147 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.661+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { total: { $sum: 1 }, _id: "$partitioned" } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D SHARDING [conn42] Command begin db: config msg id: 149 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 179 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D ASIO [conn42] startCommand: RemoteCommand 179 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.662+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.699+0000 D NETWORK [conn42] Decompressing 
message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.699+0000 D ASIO [conn42] Request 179 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393879, 763), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 908), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.699+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393879, 763), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 908), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.700+0000 D SHARDING [conn42] Command end db: config msg id: 149 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.700+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 38ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D SHARDING [conn42] Command begin db: config msg id: 151 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 180 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393279701) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D ASIO [conn42] startCommand: RemoteCommand 180 -- target:ira.node.gce-us-east1.admiral:27019 db:config 
cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393279701) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.702+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.739+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.739+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Request 180 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393878265), up: 3487075, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393876346), up: 3433212, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393875478), up: 3486973, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393879069), up: 818, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393875903), up: 74822, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872858), up: 74844, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872062), up: 74817, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393879485), up: 74797, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393875904), up: 74794, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393876532), up: 74766, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-eas Jan 13 15:37:59 ivy mongos[27723]: t1.admiral" ], mongoVersion: "4.0.5", ping: new 
Date(1547393873939), up: 74736, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393879010), up: 74768, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872788), up: 74735, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393877233), up: 74713, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393870352), up: 74706, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393878702), up: 74660, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872194), up: 74683, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393876468), up: 74687, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393877293), up: 74658, waiting: true }, { _id: "jacob:27 .......... 
5257, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393870702), up: 75221, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393879587), up: 75267, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393871195), up: 76017, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:59 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393877756), up: 76083, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393877126), up: 76083, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393876514), up: 76022, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393878878), up: 76614, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393878679), up: 76614, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875312), up: 76550, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875278), up: 76401, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875314), up: 76550, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875281), up: 76338, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393879346), up: 76405, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875282), up: 76338, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875279), up: 76213, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393876905), up: 76277, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393876908), Jan 13 15:37:59 ivy mongos[27723]: up: 76278, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393873765), up: 163, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875280), up: 76151, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393873714), up: 76211, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393879, 763), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 908), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.739+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393878265), up: 3487075, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393876346), up: 3433212, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393875478), up: 3486973, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393879069), up: 818, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393875903), up: 74822, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872858), up: 74844, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872062), up: 74817, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393879485), up: 74797, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393875904), up: 74794, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393876532), up: 74766, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:37:59 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393873939), up: 74736, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393879010), up: 74768, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872788), up: 74735, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393877233), up: 74713, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393870352), up: 74706, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393878702), up: 74660, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393872194), up: 74683, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393876468), up: 74687, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393877293), up: 74658, waiting: true }, { _ .......... 
5257, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393870702), up: 75221, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393879587), up: 75267, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393871195), up: 76017, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:37:59 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393877756), up: 76083, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393877126), up: 76083, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393876514), up: 76022, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393878878), up: 76614, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393878679), up: 76614, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875312), up: 76550, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875278), up: 76401, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875314), up: 76550, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875281), up: 76338, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393879346), up: 76405, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875282), up: 76338, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875279), up: 76213, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393876905), up: 76277, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393876908), Jan 13 15:37:59 ivy mongos[27723]: up: 76278, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393873765), up: 163, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393875280), up: 76151, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393873714), up: 76211, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393879, 763), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 908), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.740+0000 D SHARDING [conn42] Command end db: config msg id: 151 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.740+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393279701) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 38ms Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D SHARDING [conn42] Command begin db: config msg id: 153 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 181 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D ASIO [conn42] startCommand: RemoteCommand 181 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.743+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.780+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.780+0000 D ASIO [conn42] Request 181 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393879, 763), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 976), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.780+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393879, 763), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393879, 606), $clusterTime: { clusterTime: Timestamp(1547393879, 976), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.780+0000 D SHARDING [conn42] Command end db: config msg id: 153 Jan 13 15:37:59 ivy mongos[27723]: 2019-01-13T15:37:59.780+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D TRACKING [UserCacheInvalidator] Cmd: NotSet, TrackingId: 5c3b5b59a1824195fadc10b0 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D EXECUTOR [UserCacheInvalidator] Scheduling remote command request: RemoteCommand 182 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:38:31.519+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 182 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:38:31.519+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } 
Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.539+0000 D SHARDING [conn42] Command begin db: admin msg id: 155 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.540+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.540+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.540+0000 D SHARDING [conn42] Command end db: admin msg id: 155 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.540+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.555+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.556+0000 D ASIO [ShardRegistry] Request 182 finished with response: { cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393881, 586), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393881, 392), t: 1 }, lastOpVisible: { ts: Timestamp(1547393881, 392), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, 
electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393881, 392), $clusterTime: { clusterTime: Timestamp(1547393881, 627), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.556+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393881, 586), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393881, 392), t: 1 }, lastOpVisible: { ts: Timestamp(1547393881, 392), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393873, 935), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393881, 392), $clusterTime: { clusterTime: Timestamp(1547393881, 627), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.556+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.597+0000 D NETWORK [conn34] Decompressing message with snappy Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.597+0000 D ASIO [conn34] Request 32 finished with response: { queryPlanner: { plannerVersion: 1, namespace: "visitor_api.sessions4", indexFilterSet: false, parsedQuery: { $and: [ { r: { $eq: "gce-us-east1" } }, { u: { $lt: "V" } } ] }, winningPlan: { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "FETCH", filter: { u: { $lt: "V" } }, inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, e: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0 }, indexName: "r_1_e_1_ss_1_tsc_1_tslp_1", isMultiKey: false, multiKeyPaths: { r: [], e: [], ss: [], tsc: [], tslp: [] }, isUnique: false, isSparse: 
true, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], e: [ "[MinKey, MaxKey]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ] } } } } }, rejectedPlans: [ { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0, u: 1.0 }, indexName: "r_1_ss_1_tsc_1_tslp_1_u_1", isMultiKey: false, multiKeyPaths: { r: [], ss: [], tsc: [], tslp: [], u: [] }, isUnique: false, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ], u: [ "["", "V")" ] } } } } }, { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1, u: 1, pid: 1, oid: 1, incr: 1 }, indexName: "r_1_u_1_pid_1_oid_1_incr_1", isMultiKey: false, multiKeyPaths: { r: [], u: [], pid: [], oid: [], incr: [] }, isUnique: true, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], u: [ "["", "V")" ], pid: [ "[MinKey, MaxKey]" ], oid: [ "[MinKey, MaxKey]" ], incr: [ "[MinKey, MaxKey Jan 13 15:38:01 ivy mongos[27723]: ]" ] } } } } } ] }, serverInfo: { host: "phil.11-e.ninja", port: 27017, version: "4.0.5", gitVersion: "3739429dd92b92d1b0ab120911a23d50bf03c412" }, ok: 1.0, operationTime: Timestamp(1547393881, 664), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393881, 583), $configServerState: { opTime: { ts: Timestamp(1547393881, 392), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393881, 664), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } 
} Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.598+0000 D EXECUTOR [conn34] Received remote response: RemoteResponse -- cmd:{ queryPlanner: { plannerVersion: 1, namespace: "visitor_api.sessions4", indexFilterSet: false, parsedQuery: { $and: [ { r: { $eq: "gce-us-east1" } }, { u: { $lt: "V" } } ] }, winningPlan: { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "FETCH", filter: { u: { $lt: "V" } }, inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, e: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0 }, indexName: "r_1_e_1_ss_1_tsc_1_tslp_1", isMultiKey: false, multiKeyPaths: { r: [], e: [], ss: [], tsc: [], tslp: [] }, isUnique: false, isSparse: true, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], e: [ "[MinKey, MaxKey]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ] } } } } }, rejectedPlans: [ { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1.0, ss: 1.0, tsc: 1.0, tslp: 1.0, u: 1.0 }, indexName: "r_1_ss_1_tsc_1_tslp_1_u_1", isMultiKey: false, multiKeyPaths: { r: [], ss: [], tsc: [], tslp: [], u: [] }, isUnique: false, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], ss: [ "[MinKey, MaxKey]" ], tsc: [ "[MinKey, MaxKey]" ], tslp: [ "[MinKey, MaxKey]" ], u: [ "["", "V")" ] } } } } }, { stage: "LIMIT", limitAmount: 2, inputStage: { stage: "FETCH", inputStage: { stage: "SHARDING_FILTER", inputStage: { stage: "IXSCAN", keyPattern: { r: 1, u: 1, pid: 1, oid: 1, incr: 1 }, indexName: "r_1_u_1_pid_1_oid_1_incr_1", isMultiKey: false, multiKeyPaths: { r: [], u: [], pid: [], oid: [], incr: [] }, isUnique: true, isSparse: false, isPartial: false, indexVersion: 2, direction: "forward", indexBounds: { r: [ "["gce-us-east1", "gce-us-east1"]" ], u: 
[ "["", "V")" ], pid: [ "[MinKey, MaxKey]" ], oid: [ "[MinKey, MaxKey]" ], incr: [ "[ Jan 13 15:38:01 ivy mongos[27723]: MinKey, MaxKey]" ] } } } } } ] }, serverInfo: { host: "phil.11-e.ninja", port: 27017, version: "4.0.5", gitVersion: "3739429dd92b92d1b0ab120911a23d50bf03c412" }, ok: 1.0, operationTime: Timestamp(1547393881, 664), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393881, 583), $configServerState: { opTime: { ts: Timestamp(1547393881, 392), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393881, 664), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.598+0000 D SHARDING [conn34] Command end db: visitor_api msg id: 15 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.598+0000 I COMMAND [conn34] command visitor_api.$cmd appName: "MongoDB Shell" command: explain { explain: { find: "sessions4", filter: { r: "gce-us-east1", u: { $lt: "V" } }, limit: 2.0, singleBatch: false }, verbosity: "queryPlanner", lsid: { id: UUID("8b64ac7e-d8e7-4248-bd43-3e20300b615e") }, $clusterTime: { clusterTime: Timestamp(1547393714, 1109), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "visitor_api" } numYields:0 reslen:5092 protocol:op_msg 163064ms Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D SHARDING [shard registry reload] Reloading shardRegistry Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D TRACKING [shard registry reload] Cmd: NotSet, TrackingId: 5c3b5b59a1824195fadc10b3 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D EXECUTOR [shard registry reload] Scheduling remote command request: RemoteCommand 183 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:31.606+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: 
Timestamp(1547393881, 392), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D ASIO [shard registry reload] startCommand: RemoteCommand 183 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:31.606+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393881, 392), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.606+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.613+0000 D SHARDING [conn34] Command begin db: admin msg id: 16 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.613+0000 D SHARDING [conn34] Command end db: admin msg id: 16 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.613+0000 I COMMAND [conn34] command admin.$cmd appName: "MongoDB Shell" command: replSetGetStatus { replSetGetStatus: 1.0, forShell: 1.0, $clusterTime: { clusterTime: Timestamp(1547393881, 664), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:241 protocol:op_msg 0ms Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D ASIO [ShardRegistry] Request 183 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: 
"sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393881, 667), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393881, 392), t: 1 }, lastOpVisible: { ts: Timestamp(1547393881, 392), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, l Jan 13 15:38:01 ivy mongos[27723]: astCommittedOpTime: Timestamp(1547393881, 392), $clusterTime: { 
clusterTime: Timestamp(1547393881, 667), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393881, 667), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393881, 392), t: 1 }, lastOpVisible: { ts: 
Timestamp(1547393881, 392), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000 Jan 13 15:38:01 ivy mongos[27723]: 000000') }, lastCommittedOpTime: Timestamp(1547393881, 392), $clusterTime: { clusterTime: Timestamp(1547393881, 667), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D SHARDING [shard registry reload] found 7 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1547393881, 392), t: 1 } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1, with CS sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.648+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_central1, with CS sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D NETWORK 
[shard registry reload] Started targeter for sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D 
NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.649+0000 D SHARDING [shard registry reload] Adding shard config, with CS sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.728+0000 I ASIO [ShardRegistry] Ending idle connection to host ira.node.gce-us-east1.admiral:27019 because the pool meets constraints; 3 connections to that host remain open Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.728+0000 D NETWORK [ShardRegistry] Cancelling outstanding I/O operations on connection to 10.142.15.204:27019 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.786+0000 I ASIO [ShardRegistry] Ending idle connection to host ira.node.gce-us-east1.admiral:27019 because the pool meets constraints; 2 connections to that host remain open Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.786+0000 D NETWORK [ShardRegistry] Cancelling outstanding I/O operations on connection to 10.142.15.204:27019 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.878+0000 I ASIO [ShardRegistry] Ending idle connection to host ira.node.gce-us-east1.admiral:27019 because the pool meets constraints; 1 connections to that host remain open Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.878+0000 D NETWORK [ShardRegistry] Cancelling outstanding I/O 
operations on connection to 10.142.15.204:27019 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b5b59a1824195fadc10b6 Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 184 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:31.943+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393881943) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 184 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:31.943+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393881943) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:01 ivy mongos[27723]: 2019-01-13T15:38:01.943+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.157+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.158+0000 D ASIO [ShardRegistry] Request 184 finished with response: { lastErrorObject: { n: 1, updatedExisting: true 
}, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393851714) }, ok: 1.0, operationTime: Timestamp(1547393881, 1109), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393881, 1109), t: 1 }, lastOpVisible: { ts: Timestamp(1547393881, 1109), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 67), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.158+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393851714) }, ok: 1.0, operationTime: Timestamp(1547393881, 1109), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393881, 1109), t: 1 }, lastOpVisible: { ts: Timestamp(1547393881, 1109), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 67), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.158+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.265+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.265+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.301+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.301+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547393881, 1109), t: 1 }, lastWriteDate: new Date(1547393881000), majorityOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, majorityWriteDate: new Date(1547393881000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882281), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393881, 1109), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 105), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.301+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:38:01.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.301+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393881, 1109), t: 1 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.302+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393881, 1109), t: 1 }, lastWriteDate: new Date(1547393881000), majorityOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, majorityWriteDate: new Date(1547393881000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882318), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393881, 1109), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 211), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:38:01.000+0000 Jan 
13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393881, 1109), t: 1 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.340+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.377+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.446+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393881, 1109), t: 1 }, lastWriteDate: new Date(1547393881000), majorityOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, majorityWriteDate: new Date(1547393881000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882388), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393881, 1109), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 96), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:38:01.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393881, 1109), t: 1 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.447+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.485+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.485+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.486+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.486+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393882, 458), t: 1 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, majorityWriteDate: new Date(1547393881000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882462), 
logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 458), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 458), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.486+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.486+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393882, 458), t: 1 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.486+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.592+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.592+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393882, 458), t: 1 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393881, 1109), t: 1 }, majorityWriteDate: new Date(1547393881000) }, configsvr: 2, 
maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882535), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 458), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393881, 1109), $clusterTime: { clusterTime: Timestamp(1547393882, 459), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.592+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.592+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393882, 458), t: 1 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.592+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393882, 584), t: 1 }, lastWriteDate: new Date(1547393882000), 
majorityOpTime: { ts: Timestamp(1547393882, 458), t: 1 }, majorityWriteDate: new Date(1547393882000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882606), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 584), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 458), $clusterTime: { clusterTime: Timestamp(1547393882, 584), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393882, 584), t: 1 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 364 msec Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.666+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.666+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", 
"bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: "phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { ts: Timestamp(1547393882, 686), t: 22 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 589), t: 22 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882643), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 686), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393882, 589), $configServerState: { opTime: { ts: Timestamp(1547393882, 458), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 686), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.666+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.666+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393882, 686), t: 22 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.666+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.703+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 699), t: 22 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 628), t: 22 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882680), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 699), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 628), $configServerState: { opTime: { ts: Timestamp(1547393878, 1318), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 733), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393882, 699), t: 22 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D 
NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 674), t: 22 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 589), t: 22 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882700), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 674), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 589), $configServerState: { opTime: { ts: Timestamp(1547393866, 812), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 731), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393882, 674), t: 22 } Jan 13 15:38:02 ivy mongos[27723]: 
2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 74 msec Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_central1 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.704+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393882, 737), t: 4 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 639), t: 4 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882701), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 737), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393882, 639), $configServerState: { opTime: { ts: Timestamp(1547393882, 505), t: 1 } }, $clusterTime: { clusterTime: 
Timestamp(1547393882, 737), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393882, 737), t: 4 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.746+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.746+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 702), t: 4 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 639), t: 4 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882721), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 702), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: 
ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 639), $configServerState: { opTime: { ts: Timestamp(1547393859, 120), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 748), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.746+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.746+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393882, 702), t: 4 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.746+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 764), t: 4 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 702), t: 4 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882742), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, 
maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 764), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 702), $configServerState: { opTime: { ts: Timestamp(1547393880, 184), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 773), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393882, 764), t: 4 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 43 msec Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_west1 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.748+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.787+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.788+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, 
secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: "tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547393882, 846), t: 28 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 745), t: 28 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882763), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 846), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547393882, 745), $configServerState: { opTime: { ts: Timestamp(1547393882, 505), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 847), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.788+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.788+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393882, 846), t: 28 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.788+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.827+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.827+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", 
"chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 869), t: 28 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 776), t: 28 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882803), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 869), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 776), $configServerState: { opTime: { ts: Timestamp(1547393864, 813), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 870), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.827+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.827+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393882, 869), t: 28 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.827+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 863), t: 28 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 776), t: 28 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882824), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 863), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393882, 776), $configServerState: { opTime: { ts: Timestamp(1547393872, 245), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 864), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393882, 863), t: 28 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 took 82 msec Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D 
NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.830+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.930+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.930+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547393882, 867), t: 9 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 840), t: 9 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882875), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 867), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393882, 840), $configServerState: { opTime: { ts: Timestamp(1547393882, 505), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 867), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.930+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 
lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.930+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393882, 867), t: 9 } Jan 13 15:38:02 ivy mongos[27723]: 2019-01-13T15:38:02.930+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393882, 959), t: 9 }, lastWriteDate: new Date(1547393882000), majorityOpTime: { ts: Timestamp(1547393882, 894), t: 9 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393882974), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393882, 959), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547393882, 894), $configServerState: { opTime: { ts: Timestamp(1547393877, 1168), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393882, 959), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D 
NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:38:02.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393882, 959), t: 9 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 195 msec Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.026+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.122+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.122+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393883, 18), t: 4 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393882, 983), t: 4 }, majorityWriteDate: new Date(1547393882000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883070), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" 
], ok: 1.0, operationTime: Timestamp(1547393883, 18), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393882, 983), $configServerState: { opTime: { ts: Timestamp(1547393882, 584), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 18), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.122+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.122+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393883, 18), t: 4 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.122+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393883, 61), t: 4 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393883, 61), t: 4 }, majorityWriteDate: new Date(1547393883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883171), 
logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393883, 61), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393883, 61), $configServerState: { opTime: { ts: Timestamp(1547393882, 584), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 72), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393883, 61), t: 4 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west2 took 202 msec Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.229+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.335+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.335+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, 
primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393883, 133), t: 10 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393883, 112), t: 10 }, majorityWriteDate: new Date(1547393883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883277), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393883, 133), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393883, 112), $configServerState: { opTime: { ts: Timestamp(1547393883, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 133), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.335+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.335+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393883, 133), t: 10 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.335+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", 
"jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393883, 173), t: 10 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393883, 170), t: 10 }, majorityWriteDate: new Date(1547393883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883381), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393883, 173), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393883, 170), $configServerState: { opTime: { ts: Timestamp(1547393858, 641), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 248), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393883, 173), t: 10 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 207 msec Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.436+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.474+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.474+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393883, 355), t: 3 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393883, 269), t: 3 }, majorityWriteDate: new Date(1547393883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883453), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393883, 379), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393883, 269), $configServerState: { opTime: { ts: Timestamp(1547393883, 114), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 379), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.474+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.474+0000 D NETWORK 
[ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393883, 355), t: 3 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.474+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.475+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.476+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393883, 325), t: 3 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393883, 269), t: 3 }, majorityWriteDate: new Date(1547393883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883470), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393883, 325), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393883, 269), $configServerState: { opTime: { ts: Timestamp(1547393870, 282), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 353), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.476+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.476+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393883, 325), t: 3 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.476+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.514+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.514+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393883, 396), t: 3 }, lastWriteDate: new Date(1547393883000), majorityOpTime: { ts: Timestamp(1547393883, 325), t: 3 }, majorityWriteDate: new Date(1547393883000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393883490), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393883, 396), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393883, 325), $configServerState: { opTime: { ts: Timestamp(1547393881, 1), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393883, 396), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.514+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:03.000+0000 Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.514+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393883, 396), t: 3 } Jan 13 15:38:03 ivy mongos[27723]: 2019-01-13T15:38:03.514+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 77 msec Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.080+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b5ca1824195fadc10b8 Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.080+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 186 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:34.080+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393884079), up: 173, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.080+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 186 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:34.080+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393884079), up: 173, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, 
allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.080+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.080+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.082+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.082+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D ASIO [ShardRegistry] Request 186 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393884, 21), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393884, 21), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 
}, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 187 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:34.307+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393884, 21), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 187 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:34.307+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393884, 21), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.307+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback 
was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D ASIO [ShardRegistry] Request 187 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393884, 21), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393884, 21), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 
15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 188 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:34.354+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393884, 21), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 188 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:34.354+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393884, 21), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.354+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D ASIO [ShardRegistry] Request 188 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393884, 234), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, 
syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393884, 234), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 234), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 189 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:34.393+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393884, 21), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 189 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:34.393+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", 
afterOpTime: { ts: Timestamp(1547393884, 21), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.393+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.430+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.430+0000 D ASIO [ShardRegistry] Request 189 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393884, 234), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.430+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393884, 234), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393884, 21), t: 1 }, lastOpVisible: { ts: Timestamp(1547393884, 21), t: 1 }, configVersion: 6, replicaSetId: 
ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393884, 21), $clusterTime: { clusterTime: Timestamp(1547393884, 274), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:04 ivy mongos[27723]: 2019-01-13T15:38:04.430+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:10 ivy mongos[27723]: 2019-01-13T15:38:10.167+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms Jan 13 15:38:10 ivy mongos[27723]: 2019-01-13T15:38:10.167+0000 D COMMAND [PeriodicTaskRunner] task: DBConnectionPool-cleaner took: 0ms Jan 13 15:38:10 ivy mongos[27723]: 2019-01-13T15:38:10.167+0000 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager Jan 13 15:38:10 ivy mongos[27723]: 2019-01-13T15:38:10.167+0000 D COMMAND [PeriodicTaskRunner] task: UnusedLockCleaner took: 0ms Jan 13 15:38:13 ivy mongos[27723]: 2019-01-13T15:38:13.716+0000 D SHARDING [conn42] Command begin db: admin msg id: 157 Jan 13 15:38:13 ivy mongos[27723]: 2019-01-13T15:38:13.716+0000 D SHARDING [conn42] Command end db: admin msg id: 157 Jan 13 15:38:13 ivy mongos[27723]: 2019-01-13T15:38:13.716+0000 I COMMAND [conn42] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 159 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.223+0000 D SHARDING [conn42] Command end db: admin msg id: 159 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 
15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.224+0000 D SHARDING [conn42] Command begin db: admin msg id: 161 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.224+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.224+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.224+0000 D SHARDING [conn42] Command end db: admin msg id: 161 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.224+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.227+0000 D SHARDING [conn42] Command begin db: admin msg id: 163 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.227+0000 D SHARDING [conn42] Command end db: admin msg id: 163 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.227+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D SHARDING [conn42] Command begin db: config msg id: 165 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 190 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D ASIO [conn42] startCommand: RemoteCommand 190 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 
ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.228+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.266+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.266+0000 D ASIO [conn42] Request 190 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393894, 11), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 112), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.266+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393894, 11), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 112), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.266+0000 D SHARDING [conn42] Command end db: config msg id: 165 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.266+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 38ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D SHARDING [conn42] Command begin db: config msg id: 167 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D TRACKING [conn42] Cmd: 
aggregate, TrackingId: 5c3b5b66a1824195fadc10c2 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 191 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D ASIO [conn42] startCommand: RemoteCommand 191 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.333+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.333+0000 D ASIO [ShardRegistry] Request 191 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393894, 11), $replData: { term: 1, 
lastOpCommitted: { ts: Timestamp(1547393894, 11), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 11), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 169), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.333+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393894, 11), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 11), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 11), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 169), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.333+0000 D SHARDING [conn42] Command end db: config msg id: 167 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.333+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } 
], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 66ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D SHARDING [conn42] Command begin db: config msg id: 169 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 192 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D ASIO [conn42] startCommand: RemoteCommand 192 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.335+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.372+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.372+0000 D ASIO [conn42] Request 192 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, 
operationTime: Timestamp(1547393894, 11), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 169), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.372+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 11), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 169), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.372+0000 D SHARDING [conn42] Command end db: config msg id: 169 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.372+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.372+0000 D SHARDING [conn42] Command begin db: config msg id: 171 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b66a1824195fadc10c5 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 193 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { 
$match: { time: { $gt: new Date(1547393294372) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D ASIO [conn42] startCommand: RemoteCommand 193 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393294372) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.373+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.430+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b66a1824195fadc10c7 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.430+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 194 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:44.430+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393894430), up: 184, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 
2019-01-13T15:38:14.430+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 194 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:44.430+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393894430), up: 184, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.430+0000 I ASIO [ShardRegistry] Connecting to ira.node.gce-us-east1.admiral:27019 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.430+0000 D ASIO [ShardRegistry] Finished connection setup. Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D ASIO [ShardRegistry] Request 193 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393894, 11), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 11), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 11), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 284), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393894, 
11), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 11), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 11), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393884, 21), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 284), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 D SHARDING [conn42] Command end db: config msg id: 171 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.434+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393294372) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 62ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D SHARDING [conn42] Command begin db: config msg id: 173 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 195 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:38:14 
ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D ASIO [conn42] startCommand: RemoteCommand 195 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.435+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.471+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.471+0000 D ASIO [conn42] Request 195 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: 
"sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393894, 286), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 286), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.471+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: 
"sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393894, 286), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 286), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.471+0000 D SHARDING [conn42] Command end db: config msg id: 173 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.471+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 36ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D SHARDING [conn42] Command begin db: config msg id: 175 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 196 -- target:ira.node.gce-us-east1.admiral:27019 db:config 
cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D ASIO [conn42] startCommand: RemoteCommand 196 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.472+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.506+0000 D NETWORK [ShardRegistry] Starting client-side compression negotiation Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.506+0000 D NETWORK [ShardRegistry] Offering snappy compressor to server Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.508+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.508+0000 D ASIO [conn42] Request 196 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 326), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.508+0000 D EXECUTOR [conn42] Received 
remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 11), $clusterTime: { clusterTime: Timestamp(1547393894, 326), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.508+0000 D SHARDING [conn42] Command end db: config msg id: 175 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.508+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.509+0000 D SHARDING [conn42] Command begin db: config msg id: 177 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.509+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b66a1824195fadc10cb Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.509+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 197 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.509+0000 D ASIO [conn42] startCommand: RemoteCommand 197 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Finishing client-side compression negotiation Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Received message compressors from server Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Adding compressor snappy Jan 13 15:38:14 ivy 
mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.545+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.615+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.615+0000 D ASIO [ShardRegistry] Request 197 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 285), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 285), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 285), $clusterTime: { clusterTime: Timestamp(1547393894, 355), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.615+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", 
count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 285), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 285), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 285), $clusterTime: { clusterTime: Timestamp(1547393894, 355), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.615+0000 D SHARDING [conn42] Command end db: config msg id: 177 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.615+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 106ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D SHARDING [conn42] Command begin db: config msg id: 179 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b66a1824195fadc10cd Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 198 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 
ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D ASIO [conn42] startCommand: RemoteCommand 198 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.616+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.658+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.658+0000 D ASIO [ShardRegistry] Request 194 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393894, 286), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393894, 286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393894, 286), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 380), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.658+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393894, 286), t: 1 }, electionId: 
ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393894, 286), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393894, 286), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 380), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.659+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 199 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:44.659+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393894, 319), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.659+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 199 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:44.659+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393894, 319), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.659+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.659+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.659+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.659+0000 D NETWORK [ShardRegistry] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.660+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.660+0000 D ASIO [ShardRegistry] Request 198 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.660+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.660+0000 D SHARDING [conn42] Command end db: config msg id: 179 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.660+0000 I COMMAND [conn42] query config.databases 
command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 44ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D SHARDING [conn42] Command begin db: config msg id: 181 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 200 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D ASIO [conn42] startCommand: RemoteCommand 200 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.661+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D ASIO [ShardRegistry] Request 199 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: 
Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393894, 286), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393894, 286), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 201 -- target:jasper.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:38:44.695+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393894, 319), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 201 -- target:jasper.node.gce-us-west1.admiral:27019 db:config 
expDate:2019-01-13T15:38:44.695+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393894, 319), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.695+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.697+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.697+0000 D ASIO [conn42] Request 200 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.697+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } 
} Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.697+0000 D SHARDING [conn42] Command end db: config msg id: 181 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.697+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D SHARDING [conn42] Command begin db: config msg id: 183 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 202 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393294697) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D ASIO [conn42] startCommand: RemoteCommand 202 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393294697) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.698+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.734+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 
2019-01-13T15:38:14.735+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 202 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393888704), up: 3487085, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393886681), up: 3433223, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393885765), up: 3486983, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393889408), up: 828, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886133), up: 74832, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393893354), up: 74865, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393892633), up: 74838, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393889785), up: 74807, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886196), up: 74804, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886673), up: 74776, waiting: true }, { _id: 
"madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393894441), up: 74756, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393889343), up: 74779, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393893113), up: 74755, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393887417), up: 74723, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393890854), up: 74727, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393888861), up: 74670, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393892563), up: 74703, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886607), up: 74697, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393887478), up: 74669, waiting: true }, { _id: "jacob:27 .......... 
5278, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393891177), up: 75242, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393889821), up: 75277, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393891974), up: 76037, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393888115), up: 76093, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393887590), up: 76094, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393886864), up: 76032, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393889240), up: 76624, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393889039), up: 76624, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885678), up: 76561, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885642), up: 76411, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885676), up: 76561, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885645), up: 76349, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393889603), up: 76415, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885648), up: 76349, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885643), up: 76223, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393887279), up: 76288, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393887279), up: 76289, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393894430), up: 184, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885645), up: 76162, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393894423), up: 76232, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 483), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.735+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393888704), up: 3487085, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393886681), up: 3433223, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393885765), up: 3486983, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393889408), up: 828, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886133), up: 74832, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393893354), up: 74865, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393892633), up: 74838, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393889785), up: 74807, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886196), up: 74804, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886673), up: 74776, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393894441), up: 74756, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393889343), up: 74779, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393893113), up: 74755, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393887417), up: 74723, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393890854), up: 74727, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393888861), up: 74670, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393892563), up: 74703, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393886607), up: 74697, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393887478), up: 74669, waiting: true }, { _ .......... 
5278, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393891177), up: 75242, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393889821), up: 75277, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393891974), up: 76037, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393888115), up: 76093, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393887590), up: 76094, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393886864), up: 76032, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393889240), up: 76624, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393889039), up: 76624, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885678), up: 76561, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885642), up: 76411, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885676), up: 76561, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885645), up: 76349, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393889603), up: 76415, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885648), up: 76349, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885643), up: 76223, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393887279), up: 76288, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393887279), up: 76289, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393894430), up: 184, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393885645), up: 76162, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393894423), up: 76232, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 483), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.735+0000 D SHARDING [conn42] Command end db: config msg id: 183 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.735+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393294697) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 37ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D ASIO [ShardRegistry] Request 201 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 3 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 203 -- target:leon.node.gce-us-east1.admiral:27019 db:config 
expDate:2019-01-13T15:38:44.736+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393894, 319), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 203 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:44.736+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393894, 319), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D SHARDING [conn42] Command begin db: config msg id: 185 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 204 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.736+0000 D ASIO [conn42] startCommand: RemoteCommand 204 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.737+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.737+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.737+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.737+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.773+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.773+0000 D ASIO [conn42] Request 204 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 483), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.773+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 483), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.774+0000 D SHARDING [conn42] Command end db: config msg id: 185 Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.774+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 37ms Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.775+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.776+0000 D ASIO [ShardRegistry] Request 203 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 483), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.776+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393894, 319), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393894, 319), t: 1 }, lastOpVisible: { ts: Timestamp(1547393894, 319), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, 
syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393894, 319), $clusterTime: { clusterTime: Timestamp(1547393894, 483), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:14 ivy mongos[27723]: 2019-01-13T15:38:14.776+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:19 ivy mongos[27723]: 2019-01-13T15:38:19.458+0000 D NETWORK [TaskExecutorPool-0] Compressing message with snappy Jan 13 15:38:19 ivy mongos[27723]: 2019-01-13T15:38:19.496+0000 D NETWORK [TaskExecutorPool-0] Decompressing message with snappy Jan 13 15:38:19 ivy mongos[27723]: 2019-01-13T15:38:19.496+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b70a1824195fadc10d5 Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 206 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:54.776+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393904776), up: 194, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 206 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:54.776+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { 
_id: "ivy:27018", ping: new Date(1547393904776), up: 194, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.776+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.972+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.972+0000 D ASIO [ShardRegistry] Request 206 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393904, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 525), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 525), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 525), $clusterTime: { clusterTime: Timestamp(1547393904, 660), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.972+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 
1, opTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393904, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 525), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 525), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 525), $clusterTime: { clusterTime: Timestamp(1547393904, 660), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.973+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 207 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:54.973+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393904, 525), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.973+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 207 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:54.973+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393904, 525), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.973+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.973+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.973+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 
2019-01-13T15:38:24.973+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:24 ivy mongos[27723]: 2019-01-13T15:38:24.973+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D ASIO [ShardRegistry] Request 207 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393904, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 525), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 525), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 525), $clusterTime: { clusterTime: Timestamp(1547393904, 668), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393904, 525), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 525), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 525), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 
525), $clusterTime: { clusterTime: Timestamp(1547393904, 668), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 208 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:55.009+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393904, 525), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 208 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:55.009+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393904, 525), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.009+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D ASIO [ShardRegistry] Request 208 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, 
ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393904, 569), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 569), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 569), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 569), $clusterTime: { clusterTime: Timestamp(1547393904, 711), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393904, 569), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 569), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 569), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 569), $clusterTime: { clusterTime: Timestamp(1547393904, 711), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 209 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:55.046+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393904, 569), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D ASIO [Uptime reporter] 
startCommand: RemoteCommand 209 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:38:55.046+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393904, 569), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.046+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.083+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.083+0000 D ASIO [ShardRegistry] Request 209 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393904, 569), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 569), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 569), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 569), $clusterTime: { clusterTime: Timestamp(1547393904, 711), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:25 ivy mongos[27723]: 
2019-01-13T15:38:25.083+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393904, 569), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393904, 569), t: 1 }, lastOpVisible: { ts: Timestamp(1547393904, 569), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393904, 569), $clusterTime: { clusterTime: Timestamp(1547393904, 711), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:25 ivy mongos[27723]: 2019-01-13T15:38:25.083+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:28 ivy mongos[27723]: 2019-01-13T15:38:28.719+0000 D SHARDING [conn42] Command begin db: admin msg id: 187 Jan 13 15:38:28 ivy mongos[27723]: 2019-01-13T15:38:28.719+0000 D SHARDING [conn42] Command end db: admin msg id: 187 Jan 13 15:38:28 ivy mongos[27723]: 2019-01-13T15:38:28.719+0000 I COMMAND [conn42] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 189 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.223+0000 D SHARDING [conn42] Command end db: admin msg id: 189 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.225+0000 D SHARDING [conn42] Command begin db: admin msg id: 191 Jan 13 15:38:29 ivy 
mongos[27723]: 2019-01-13T15:38:29.225+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.225+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.225+0000 D SHARDING [conn42] Command end db: admin msg id: 191 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.225+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.228+0000 D SHARDING [conn42] Command begin db: admin msg id: 193 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.229+0000 D SHARDING [conn42] Command end db: admin msg id: 193 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.229+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D SHARDING [conn42] Command begin db: config msg id: 195 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 210 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D ASIO [conn42] startCommand: RemoteCommand 210 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was 
canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.230+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.266+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.266+0000 D ASIO [conn42] Request 210 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393909, 65), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 50), $clusterTime: { clusterTime: Timestamp(1547393909, 117), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.266+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393909, 65), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 50), $clusterTime: { clusterTime: Timestamp(1547393909, 117), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.266+0000 D SHARDING [conn42] Command end db: config msg id: 195 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.266+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D SHARDING [conn42] Command begin db: config msg id: 197 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b75a1824195fadc10df Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D EXECUTOR [conn42] 
Scheduling remote command request: RemoteCommand 211 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { count: { $sum: 1 }, _id: "$shard" } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D ASIO [conn42] startCommand: RemoteCommand 211 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { count: { $sum: 1 }, _id: "$shard" } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.267+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.333+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.333+0000 D ASIO [ShardRegistry] Request 211 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 50), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 50), t: 1 }, configVersion: 6, replicaSetId: 
ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 50), $clusterTime: { clusterTime: Timestamp(1547393909, 203), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.333+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 50), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 50), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 50), $clusterTime: { clusterTime: Timestamp(1547393909, 203), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D SHARDING [conn42] Command end db: config msg id: 197 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { count: { $sum: 1 }, _id: "$shard" } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 66ms Jan 13 15:38:29 ivy mongos[27723]: 
2019-01-13T15:38:29.334+0000 D SHARDING [conn42] Command begin db: config msg id: 199 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 212 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D ASIO [conn42] startCommand: RemoteCommand 212 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.334+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D ASIO [conn42] Request 212 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1547393909, 65), $clusterTime: { clusterTime: Timestamp(1547393909, 203), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 65), $clusterTime: { clusterTime: Timestamp(1547393909, 203), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D SHARDING [conn42] Command end db: config msg id: 199 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 37ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D SHARDING [conn42] Command begin db: config msg id: 201 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b75a1824195fadc10e2 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 213 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393309371) } } }, { $group: { count: { $sum: 1 }, _id: { event: "$what", note: "$details.note" } } } ], 
fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D ASIO [conn42] startCommand: RemoteCommand 213 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393309371) } } }, { $group: { count: { $sum: 1 }, _id: { event: "$what", note: "$details.note" } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.371+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.372+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.372+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.423+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.423+0000 D ASIO [ShardRegistry] Request 213 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 65), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 65), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 65), $clusterTime: { clusterTime: Timestamp(1547393909, 231), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.423+0000 
D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 65), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 65), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 65), $clusterTime: { clusterTime: Timestamp(1547393909, 231), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.423+0000 D SHARDING [conn42] Command end db: config msg id: 201 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.423+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393309371) } } }, { $group: { count: { $sum: 1 }, _id: { event: "$what", note: "$details.note" } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 51ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D SHARDING [conn42] Command begin db: config msg id: 203 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 214 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D ASIO [conn42] startCommand: RemoteCommand 214 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: 
"/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.424+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.460+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D ASIO [conn42] Request 214 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: 
"sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 284), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: 
"sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393909, 203), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 284), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D SHARDING [conn42] Command end db: config msg id: 203 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 37ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D SHARDING [conn42] Command begin db: config msg id: 205 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 215 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D ASIO [conn42] startCommand: RemoteCommand 215 -- 
target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.461+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D ASIO [conn42] Request 215 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393909, 203), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 284), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393909, 203), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 284), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D SHARDING [conn42] Command end db: config msg id: 205 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 I COMMAND [conn42] query 
config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D SHARDING [conn42] Command begin db: config msg id: 207 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b75a1824195fadc10e6 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 216 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { count: { $sum: 1 }, _id: "$shard" } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D ASIO [conn42] startCommand: RemoteCommand 216 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { count: { $sum: 1 }, _id: "$shard" } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.498+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.572+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.572+0000 D ASIO [ShardRegistry] Request 216 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: 
"sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393909, 285), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 203), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 203), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 321), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.572+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393909, 285), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 203), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 203), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 321), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 
15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D SHARDING [conn42] Command end db: config msg id: 207 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { count: { $sum: 1 }, _id: "$shard" } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 74ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D SHARDING [conn42] Command begin db: config msg id: 209 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b75a1824195fadc10e8 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 217 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D ASIO [conn42] startCommand: RemoteCommand 217 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.573+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy 
mongos[27723]: 2019-01-13T15:38:29.610+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.610+0000 D ASIO [ShardRegistry] Request 217 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393909, 433), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 203), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 203), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 433), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.610+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393909, 433), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393909, 203), t: 1 }, lastOpVisible: { ts: Timestamp(1547393909, 203), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 433), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.610+0000 D SHARDING [conn42] Command end db: config msg id: 209 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.610+0000 I COMMAND [conn42] query config.databases command: { 
aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.610+0000 D SHARDING [conn42] Command begin db: config msg id: 211 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.611+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 218 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.611+0000 D ASIO [conn42] startCommand: RemoteCommand 218 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.611+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.611+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.611+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.611+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.647+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.647+0000 D ASIO [conn42] Request 218 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393909, 433), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 433), signature: { 
hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.647+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393909, 433), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 203), $clusterTime: { clusterTime: Timestamp(1547393909, 433), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.648+0000 D SHARDING [conn42] Command end db: config msg id: 211 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.648+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.650+0000 D SHARDING [conn42] Command begin db: config msg id: 213 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.650+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 219 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393309649) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.650+0000 D ASIO [conn42] startCommand: RemoteCommand 219 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393309649) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.650+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy 
mongos[27723]: 2019-01-13T15:38:29.650+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.650+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.650+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.688+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.689+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 219 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393909465), up: 3487106, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393907410), up: 3433243, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393906450), up: 3487004, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393899596), up: 838, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393906684), up: 74852, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393903554), up: 74875, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393902811), up: 74848, waiting: true }, { _id: 
"trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393900081), up: 74818, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393906685), up: 74824, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393907104), up: 74796, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-eas Jan 13 15:38:29 ivy mongos[27723]: t1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393904641), up: 74767, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393899599), up: 74789, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393903335), up: 74765, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393907712), up: 74744, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393901069), up: 74737, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393909285), up: 74691, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393902742), up: 74713, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393907041), up: 74718, waiting: true }, { 
_id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907777), up: 74689, waiting: true }, { _id: "jacob:27 .......... 5288, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393901436), up: 75252, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393900044), up: 75287, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393902367), up: 76048, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:38:29 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393908867), up: 76114, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393908143), up: 76114, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907496), up: 76053, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393899706), up: 76635, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393899424), up: 76635, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906383), up: 76581, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906348), up: 76432, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906385), up: 76581, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ 
"niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906352), up: 76369, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393899986), up: 76425, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906353), up: 76369, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906350), up: 76244, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907920), up: 76308, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907923), Jan 13 15:38:29 ivy mongos[27723]: up: 76309, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393904776), up: 194, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906353), up: 76183, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393904778), up: 76242, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393909, 433), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 285), $clusterTime: { clusterTime: Timestamp(1547393909, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.689+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393909465), up: 3487106, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393907410), up: 3433243, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393906450), up: 3487004, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393899596), up: 838, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393906684), up: 74852, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393903554), up: 74875, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393902811), up: 74848, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393900081), up: 74818, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393906685), up: 74824, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393907104), up: 74796, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:38:29 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393904641), up: 74767, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393899599), up: 74789, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393903335), up: 74765, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393907712), up: 74744, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393901069), up: 74737, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393909285), up: 74691, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393902742), up: 74713, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393907041), up: 74718, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907777), up: 74689, waiting: true }, { _ .......... 
5288, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393901436), up: 75252, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393900044), up: 75287, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393902367), up: 76048, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:38:29 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393908867), up: 76114, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393908143), up: 76114, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907496), up: 76053, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393899706), up: 76635, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393899424), up: 76635, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906383), up: 76581, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906348), up: 76432, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906385), up: 76581, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906352), up: 76369, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393899986), up: 76425, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906353), up: 76369, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906350), up: 76244, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907920), up: 76308, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393907923), Jan 13 15:38:29 ivy mongos[27723]: up: 76309, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393904776), up: 194, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393906353), up: 76183, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393904778), up: 76242, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393909, 433), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 285), $clusterTime: { clusterTime: Timestamp(1547393909, 450), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.690+0000 D SHARDING [conn42] Command end db: config msg id: 213 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.690+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393309649) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 39ms Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.691+0000 D SHARDING [conn42] Command begin db: config msg id: 215 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.691+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 220 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.691+0000 D ASIO [conn42] startCommand: RemoteCommand 220 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.691+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.691+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.692+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.692+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.728+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.728+0000 D ASIO [conn42] Request 220 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393909, 458), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1547393909, 285), $clusterTime: { clusterTime: Timestamp(1547393909, 458), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.728+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393909, 458), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393909, 285), $clusterTime: { clusterTime: Timestamp(1547393909, 458), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.728+0000 D SHARDING [conn42] Command end db: config msg id: 215 Jan 13 15:38:29 ivy mongos[27723]: 2019-01-13T15:38:29.728+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 36ms Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D TRACKING [UserCacheInvalidator] Cmd: NotSet, TrackingId: 5c3b5b77a1824195fadc10ed Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D EXECUTOR [UserCacheInvalidator] Scheduling remote command request: RemoteCommand 221 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:39:01.519+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand 221 -- target:ira.node.gce-us-east1.admiral:27019 db:admin expDate:2019-01-13T15:39:01.519+0000 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } 
Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.519+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.541+0000 D SHARDING [conn42] Command begin db: admin msg id: 217 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.541+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.541+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.541+0000 D SHARDING [conn42] Command end db: admin msg id: 217 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.541+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { ismaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.556+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.556+0000 D ASIO [ShardRegistry] Request 221 finished with response: { cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393911, 342), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393911, 191), t: 1 }, lastOpVisible: { ts: Timestamp(1547393911, 191), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, 
electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393911, 191), $clusterTime: { clusterTime: Timestamp(1547393911, 351), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.556+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cacheGeneration: ObjectId('5c002e8aad899acfb0bbfd1e'), ok: 1.0, operationTime: Timestamp(1547393911, 342), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393911, 191), t: 1 }, lastOpVisible: { ts: Timestamp(1547393911, 191), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393904, 525), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393911, 191), $clusterTime: { clusterTime: Timestamp(1547393911, 351), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.556+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D SHARDING [shard registry reload] Reloading shardRegistry Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D TRACKING [shard registry reload] Cmd: NotSet, TrackingId: 5c3b5b77a1824195fadc10f0 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D EXECUTOR [shard registry reload] Scheduling remote command request: RemoteCommand 222 -- target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:39:01.649+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393911, 191), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D ASIO [shard registry reload] startCommand: RemoteCommand 222 -- 
target:mateo.node.gce-us-west1.admiral:27019 db:config expDate:2019-01-13T15:39:01.649+0000 cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393911, 191), t: 1 } }, maxTimeMS: 30000 } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D ASIO [ShardRegistry] Request 222 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", 
host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393911, 342), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393911, 301), t: 1 }, lastOpVisible: { ts: Timestamp(1547393911, 301), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, l Jan 13 15:38:31 ivy mongos[27723]: astCommittedOpTime: Timestamp(1547393911, 301), $clusterTime: { clusterTime: Timestamp(1547393911, 363), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: 
"sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393911, 342), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393911, 301), t: 1 }, lastOpVisible: { ts: Timestamp(1547393911, 301), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 1 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000 Jan 13 15:38:31 ivy mongos[27723]: 000000') }, lastCommittedOpTime: Timestamp(1547393911, 301), $clusterTime: { clusterTime: Timestamp(1547393911, 363), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] found 7 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1547393911, 
301), t: 1 } Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1, with CS sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_central1, with CS sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_west1, with CS sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard 
sessions_gce_europe_west1, with CS sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west2, with CS sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_europe_west3, with CS sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D NETWORK [shard registry reload] Started targeter for sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard sessions_gce_us_east1_2, with CS sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017 Jan 13 15:38:31 ivy mongos[27723]: 2019-01-13T15:38:31.689+0000 D SHARDING [shard registry reload] Adding shard config, with CS 
sessions_config/ira.node.gce-us-east1.admiral:27019,jasper.node.gce-us-west1.admiral:27019,kratos.node.gce-europe-west3.admiral:27019,leon.node.gce-us-east1.admiral:27019,mateo.node.gce-us-west1.admiral:27019,newton.node.gce-europe-west3.admiral:27019 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5c3b5b78a1824195fadc10f2 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D EXECUTOR [replSetDistLockPinger] Scheduling remote command request: RemoteCommand 223 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:02.158+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393912158) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand 223 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:02.158+0000 cmd:{ findAndModify: "lockpings", query: { _id: "ivy:27018:1547393707:-6945163188777852108" }, update: { $set: { ping: new Date(1547393912158) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.158+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.361+0000 D NETWORK [ShardRegistry] 
Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.361+0000 D ASIO [ShardRegistry] Request 223 finished with response: { lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393881943) }, ok: 1.0, operationTime: Timestamp(1547393912, 14), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393912, 14), t: 1 }, lastOpVisible: { ts: Timestamp(1547393912, 14), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393912, 14), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393912, 14), $clusterTime: { clusterTime: Timestamp(1547393912, 255), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.361+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ lastErrorObject: { n: 1, updatedExisting: true }, value: { _id: "ivy:27018:1547393707:-6945163188777852108", ping: new Date(1547393881943) }, ok: 1.0, operationTime: Timestamp(1547393912, 14), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393912, 14), t: 1 }, lastOpVisible: { ts: Timestamp(1547393912, 14), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393912, 14), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393912, 14), $clusterTime: { clusterTime: Timestamp(1547393912, 255), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.361+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:32 ivy mongos[27723]: 
2019-01-13T15:38:32.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_config Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.630+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.666+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.667+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ira.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: true, secondary: false, primary: "ira.node.gce-us-east1.admiral:27019", me: "ira.node.gce-us-east1.admiral:27019", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1547393912, 332), t: 1 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 153), t: 1 }, majorityWriteDate: new Date(1547393912000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393912647), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393912, 153), $clusterTime: { clusterTime: Timestamp(1547393912, 360), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.667+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating 
ira.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.667+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ira.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393912, 332), t: 1 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.667+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.705+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host mateo.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "mateo.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393912, 332), t: 1 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 153), t: 1 }, majorityWriteDate: new Date(1547393912000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393912681), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 153), $clusterTime: { clusterTime: Timestamp(1547393912, 434), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 
15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating mateo.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393912, 332), t: 1 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.706+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.812+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.813+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host newton.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "newton.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393912, 332), t: 1 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 332), t: 1 }, majorityWriteDate: new Date(1547393912000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393912755), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 332), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 332), $clusterTime: { clusterTime: 
Timestamp(1547393912, 529), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.813+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.813+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating newton.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393912, 332), t: 1 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.813+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.919+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.919+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host kratos.node.gce-europe-west3.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "kratos.node.gce-europe-west3.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393912, 556), t: 1 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 332), t: 1 }, majorityWriteDate: new Date(1547393912000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393912860), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 556), $gleStats: { lastOpTime: Timestamp(0, 0), 
electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 332), $clusterTime: { clusterTime: Timestamp(1547393912, 563), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.919+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.919+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating kratos.node.gce-europe-west3.admiral:27019 opTime to { ts: Timestamp(1547393912, 556), t: 1 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.919+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.958+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.959+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jasper.node.gce-us-west1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "jasper.node.gce-us-west1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393912, 557), t: 1 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 332), t: 1 }, majorityWriteDate: new Date(1547393912000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393912936), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, 
compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 557), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 332), $clusterTime: { clusterTime: Timestamp(1547393912, 626), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.959+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.959+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jasper.node.gce-us-west1.admiral:27019 opTime to { ts: Timestamp(1547393912, 557), t: 1 } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.959+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.996+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.996+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host leon.node.gce-us-east1.admiral:27019 based on ismaster reply: { hosts: [ "ira.node.gce-us-east1.admiral:27019", "jasper.node.gce-us-west1.admiral:27019", "kratos.node.gce-europe-west3.admiral:27019", "leon.node.gce-us-east1.admiral:27019", "newton.node.gce-europe-west3.admiral:27019", "mateo.node.gce-us-west1.admiral:27019" ], setName: "sessions_config", setVersion: 6, ismaster: false, secondary: true, primary: "ira.node.gce-us-east1.admiral:27019", me: "leon.node.gce-us-east1.admiral:27019", lastWrite: { opTime: { ts: Timestamp(1547393912, 745), t: 1 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 556), t: 1 }, majorityWriteDate: new Date(1547393912000) }, configsvr: 2, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new 
Date(1547393912973), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 745), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 556), $clusterTime: { clusterTime: Timestamp(1547393912, 745), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:32 ivy mongos[27723]: 2019-01-13T15:38:32.996+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:32.996+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating leon.node.gce-us-east1.admiral:27019 opTime to { ts: Timestamp(1547393912, 745), t: 1 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:32.996+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_config took 366 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:32.996+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:32.997+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.035+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.035+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host phil.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: true, secondary: false, primary: 
"phil.node.gce-us-east1.admiral:27017", me: "phil.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000016'), lastWrite: { opTime: { ts: Timestamp(1547393913, 4), t: 22 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393912, 691), t: 22 }, majorityWriteDate: new Date(1547393912000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913011), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 4), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000016') }, lastCommittedOpTime: Timestamp(1547393912, 691), $configServerState: { opTime: { ts: Timestamp(1547393912, 332), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.035+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.035+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating phil.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393913, 4), t: 22 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.035+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.037+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.037+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host bambi.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], 
arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "bambi.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393912, 769), t: 22 }, lastWriteDate: new Date(1547393912000), majorityOpTime: { ts: Timestamp(1547393912, 691), t: 22 }, majorityWriteDate: new Date(1547393912000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913032), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393912, 769), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 691), $configServerState: { opTime: { ts: Timestamp(1547393896, 689), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393912, 773), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.037+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:32.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.037+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating bambi.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393912, 769), t: 22 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.037+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host zeta.node.gce-us-east1.admiral:27017 based 
on ismaster reply: { hosts: [ "zeta.node.gce-us-east1.admiral:27017", "phil.node.gce-us-east1.admiral:27017", "bambi.node.gce-us-central1.admiral:27017" ], arbiters: [ "elrond.node.gce-us-west1.admiral:27017", "dale.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1", setVersion: 19, ismaster: false, secondary: true, primary: "phil.node.gce-us-east1.admiral:27017", me: "zeta.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 17), t: 22 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393912, 740), t: 22 }, majorityWriteDate: new Date(1547393912000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913051), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 17), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 740), $configServerState: { opTime: { ts: Timestamp(1547393909, 50), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 25), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating zeta.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393913, 17), t: 22 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1 took 78 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica 
set sessions_gce_us_central1 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.075+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host camden.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: true, secondary: false, primary: "camden.node.gce-us-central1.admiral:27017", me: "camden.node.gce-us-central1.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393913, 34), t: 4 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 3), t: 4 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913072), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 34), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393913, 3), $configServerState: { opTime: { ts: Timestamp(1547393912, 557), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 34), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 
lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating camden.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393913, 34), t: 4 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.077+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.078+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.078+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host percy.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "percy.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 30), t: 4 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 3), t: 4 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913072), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 30), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393913, 3), $configServerState: { opTime: { ts: Timestamp(1547393910, 42), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 34), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } 
} Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.078+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.078+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating percy.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393913, 30), t: 4 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.078+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.117+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.118+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host umbra.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "percy.node.gce-us-central1.admiral:27017", "umbra.node.gce-us-west1.admiral:27017", "camden.node.gce-us-central1.admiral:27017" ], arbiters: [ "desmond.node.gce-us-east1.admiral:27017", "flint.node.gce-us-west1.admiral:27017" ], setName: "sessions_gce_us_central1", setVersion: 6, ismaster: false, secondary: true, primary: "camden.node.gce-us-central1.admiral:27017", me: "umbra.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 21), t: 4 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393912, 737), t: 4 }, majorityWriteDate: new Date(1547393912000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913093), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 21), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393912, 737), $configServerState: { opTime: { ts: Timestamp(1547393902, 
658), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 24), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.118+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.118+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating umbra.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393913, 21), t: 4 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.118+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_central1 took 42 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.118+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_west1 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.118+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.157+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.157+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host tony.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: true, secondary: false, primary: "tony.node.gce-us-west1.admiral:27017", me: "tony.node.gce-us-west1.admiral:27017", electionId: ObjectId('7fffffff000000000000001c'), lastWrite: { opTime: { ts: Timestamp(1547393913, 74), t: 28 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 14), t: 28 }, 
majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913133), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 74), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000001c') }, lastCommittedOpTime: Timestamp(1547393913, 14), $configServerState: { opTime: { ts: Timestamp(1547393912, 557), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 74), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.157+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.157+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating tony.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393913, 74), t: 28 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.158+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.159+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.160+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host chloe.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "chloe.node.gce-us-central1.admiral:27017", lastWrite: 
{ opTime: { ts: Timestamp(1547393913, 61), t: 28 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 14), t: 28 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913154), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 61), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393913, 14), $configServerState: { opTime: { ts: Timestamp(1547393902, 82), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 61), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.160+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.160+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating chloe.node.gce-us-central1.admiral:27017 opTime to { ts: Timestamp(1547393913, 61), t: 28 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.160+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host william.node.gce-us-west1.admiral:27017 based on ismaster reply: { hosts: [ "william.node.gce-us-west1.admiral:27017", "tony.node.gce-us-west1.admiral:27017", "chloe.node.gce-us-central1.admiral:27017" ], arbiters: [ "sarah.node.gce-us-east1.admiral:27017", "gerda.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_west1", 
setVersion: 82625, ismaster: false, secondary: true, primary: "tony.node.gce-us-west1.admiral:27017", me: "william.node.gce-us-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 101), t: 28 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 38), t: 28 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913175), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 101), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393913, 38), $configServerState: { opTime: { ts: Timestamp(1547393894, 668), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 102), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating william.node.gce-us-west1.admiral:27017 opTime to { ts: Timestamp(1547393913, 101), t: 28 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_west1 took 81 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west1 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.199+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] 
Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host vivi.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: true, secondary: false, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "vivi.node.gce-europe-west1.admiral:27017", electionId: ObjectId('7fffffff0000000000000009'), lastWrite: { opTime: { ts: Timestamp(1547393913, 122), t: 9 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 87), t: 9 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913245), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 122), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393913, 87), $configServerState: { opTime: { ts: Timestamp(1547393912, 745), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 122), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating vivi.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393913, 122), t: 9 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.300+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] 
Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host hilda.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "vivi.node.gce-europe-west1.admiral:27017", "hilda.node.gce-europe-west2.admiral:27017" ], arbiters: [ "hubert.node.gce-europe-west3.admiral:27017" ], setName: "sessions_gce_europe_west1", setVersion: 4, ismaster: false, secondary: true, primary: "vivi.node.gce-europe-west1.admiral:27017", me: "hilda.node.gce-europe-west2.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 181), t: 9 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 181), t: 9 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913344), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 181), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000008') }, lastCommittedOpTime: Timestamp(1547393913, 181), $configServerState: { opTime: { ts: Timestamp(1547393907, 711), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 192), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating hilda.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393913, 181), t: 9 } Jan 13 
15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.395+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west1 took 196 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.396+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west2 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.396+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.491+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.491+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ignis.node.gce-europe-west2.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: true, secondary: false, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "ignis.node.gce-europe-west2.admiral:27017", electionId: ObjectId('7fffffff0000000000000004'), lastWrite: { opTime: { ts: Timestamp(1547393913, 251), t: 4 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 198), t: 4 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913439), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 251), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000004') }, lastCommittedOpTime: Timestamp(1547393913, 198), $configServerState: { opTime: { ts: Timestamp(1547393913, 74), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 251), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.491+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.491+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ignis.node.gce-europe-west2.admiral:27017 opTime to { ts: Timestamp(1547393913, 251), t: 4 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.492+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host keith.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "ignis.node.gce-europe-west2.admiral:27017", "keith.node.gce-europe-west3.admiral:27017" ], arbiters: [ "francis.node.gce-europe-west1.admiral:27017" ], setName: "sessions_gce_europe_west2", setVersion: 6, ismaster: false, secondary: true, primary: "ignis.node.gce-europe-west2.admiral:27017", me: "keith.node.gce-europe-west3.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 291), t: 4 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 265), t: 4 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913540), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 291), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393913, 265), $configServerState: { opTime: { ts: 
Timestamp(1547393912, 627), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 292), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating keith.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393913, 291), t: 4 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west2 took 202 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_europe_west3 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.598+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.705+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.705+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host albert.node.gce-europe-west3.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: true, secondary: false, primary: "albert.node.gce-europe-west3.admiral:27017", me: "albert.node.gce-europe-west3.admiral:27017", electionId: ObjectId('7fffffff000000000000000a'), lastWrite: { opTime: { ts: Timestamp(1547393913, 383), t: 10 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 357), t: 10 }, majorityWriteDate: new 
Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913647), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 383), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff000000000000000a') }, lastCommittedOpTime: Timestamp(1547393913, 357), $configServerState: { opTime: { ts: Timestamp(1547393913, 74), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 383), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.705+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.705+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating albert.node.gce-europe-west3.admiral:27017 opTime to { ts: Timestamp(1547393913, 383), t: 10 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.705+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host jordan.node.gce-europe-west1.admiral:27017 based on ismaster reply: { hosts: [ "albert.node.gce-europe-west3.admiral:27017", "jordan.node.gce-europe-west1.admiral:27017" ], arbiters: [ "garry.node.gce-europe-west2.admiral:27017" ], setName: "sessions_gce_europe_west3", setVersion: 6, ismaster: false, secondary: true, primary: "albert.node.gce-europe-west3.admiral:27017", me: "jordan.node.gce-europe-west1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 465), t: 10 }, lastWriteDate: new 
Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 460), t: 10 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913751), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 465), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000009') }, lastCommittedOpTime: Timestamp(1547393913, 460), $configServerState: { opTime: { ts: Timestamp(1547393888, 523), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 491), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating jordan.node.gce-europe-west1.admiral:27017 opTime to { ts: Timestamp(1547393913, 465), t: 10 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_europe_west3 took 207 msec Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Starting new refresh of replica set sessions_gce_us_east1_2 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.806+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.843+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.843+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host queen.node.gce-us-east1.admiral:27017 based on ismaster reply: { 
hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: true, secondary: false, primary: "queen.node.gce-us-east1.admiral:27017", me: "queen.node.gce-us-east1.admiral:27017", electionId: ObjectId('7fffffff0000000000000003'), lastWrite: { opTime: { ts: Timestamp(1547393913, 602), t: 3 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 549), t: 3 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913823), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 602), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000003') }, lastCommittedOpTime: Timestamp(1547393913, 549), $configServerState: { opTime: { ts: Timestamp(1547393913, 358), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 607), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.843+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.843+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating queen.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393913, 602), t: 3 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.844+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.881+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message 
with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.881+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host april.node.gce-us-east1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "april.node.gce-us-east1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 624), t: 3 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 587), t: 3 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913858), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 624), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393913, 587), $configServerState: { opTime: { ts: Timestamp(1547393911, 148), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 624), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.881+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.881+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating april.node.gce-us-east1.admiral:27017 opTime to { ts: Timestamp(1547393913, 624), t: 3 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.881+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Compressing 
message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.883+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Decompressing message with snappy Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.883+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating host ralph.node.gce-us-central1.admiral:27017 based on ismaster reply: { hosts: [ "queen.node.gce-us-east1.admiral:27017", "ralph.node.gce-us-central1.admiral:27017", "april.node.gce-us-east1.admiral:27017" ], arbiters: [ "simon.node.gce-us-west1.admiral:27017", "edison.node.gce-us-central1.admiral:27017" ], setName: "sessions_gce_us_east1_2", setVersion: 4, ismaster: false, secondary: true, primary: "queen.node.gce-us-east1.admiral:27017", me: "ralph.node.gce-us-central1.admiral:27017", lastWrite: { opTime: { ts: Timestamp(1547393913, 601), t: 3 }, lastWriteDate: new Date(1547393913000), majorityOpTime: { ts: Timestamp(1547393913, 549), t: 3 }, majorityWriteDate: new Date(1547393913000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1547393913878), logicalSessionTimeoutMinutes: 30, minWireVersion: 7, maxWireVersion: 7, readOnly: false, compression: [ "snappy" ], ok: 1.0, operationTime: Timestamp(1547393913, 601), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393913, 549), $configServerState: { opTime: { ts: Timestamp(1547393900, 435), t: 1 } }, $clusterTime: { clusterTime: Timestamp(1547393913, 622), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.883+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 lastWriteDate to 2019-01-13T15:38:33.000+0000 Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.883+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Updating ralph.node.gce-us-central1.admiral:27017 opTime to 
{ ts: Timestamp(1547393913, 601), t: 3 } Jan 13 15:38:33 ivy mongos[27723]: 2019-01-13T15:38:33.883+0000 D NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set sessions_gce_us_east1_2 took 77 msec Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b7ba1824195fadc10f4 Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 224 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.083+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393915083), up: 204, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 224 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.083+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393915083), up: 204, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D NETWORK [ShardRegistry] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.083+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D ASIO [ShardRegistry] Request 224 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393915, 25), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 26), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 26), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 26), $clusterTime: { clusterTime: Timestamp(1547393915, 130), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393915, 25), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 26), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 26), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 26), $clusterTime: { clusterTime: Timestamp(1547393915, 130), signature: { hash: BinData(0, 
0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 225 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.282+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393915, 26), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 225 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.282+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393915, 26), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.282+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D ASIO [ShardRegistry] Request 225 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: 
Timestamp(1547393915, 26), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 26), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 26), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 26), $clusterTime: { clusterTime: Timestamp(1547393915, 153), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393915, 26), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 26), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 26), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 26), $clusterTime: { clusterTime: Timestamp(1547393915, 153), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 226 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.319+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393915, 26), t: 1 } 
}, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 226 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.319+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393915, 26), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.319+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D ASIO [ShardRegistry] Request 226 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393915, 67), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 67), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 67), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 67), $clusterTime: { clusterTime: Timestamp(1547393915, 177), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 
2019-01-13T15:38:35.356+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393915, 67), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 67), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 67), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 67), $clusterTime: { clusterTime: Timestamp(1547393915, 177), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 227 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.356+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393915, 67), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 227 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:05.356+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393915, 67), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D NETWORK [ShardRegistry] Timer received error: 
CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.356+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.393+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.393+0000 D ASIO [ShardRegistry] Request 227 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393915, 67), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 67), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 67), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 67), $clusterTime: { clusterTime: Timestamp(1547393915, 177), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.393+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393915, 67), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393915, 67), t: 1 }, lastOpVisible: { ts: Timestamp(1547393915, 67), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393915, 67), $clusterTime: { clusterTime: Timestamp(1547393915, 177), signature: { hash: 
BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:35 ivy mongos[27723]: 2019-01-13T15:38:35.393+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:43 ivy mongos[27723]: 2019-01-13T15:38:43.721+0000 D SHARDING [conn42] Command begin db: admin msg id: 219 Jan 13 15:38:43 ivy mongos[27723]: 2019-01-13T15:38:43.721+0000 D SHARDING [conn42] Command end db: admin msg id: 219 Jan 13 15:38:43 ivy mongos[27723]: 2019-01-13T15:38:43.721+0000 I COMMAND [conn42] command admin.$cmd command: ping { ping: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:178 protocol:op_query 0ms Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 221 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 D SHARDING [conn42] Command end db: admin msg id: 221 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 I COMMAND [conn42] query admin.1 command: { buildInfo: "1", $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:1340 0ms Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 D SHARDING [conn42] Command begin db: admin msg id: 223 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 D NETWORK [conn42] Starting server-side compression negotiation Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 D NETWORK [conn42] Compression negotiation not requested by client Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 D SHARDING [conn42] Command end db: admin msg id: 223 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.223+0000 I COMMAND [conn42] command admin.$cmd command: isMaster { isMaster: 1, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:371 protocol:op_query 0ms Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.224+0000 D SHARDING [conn42] Command begin db: admin 
msg id: 225
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.224+0000 D SHARDING [conn42] Command end db: admin msg id: 225
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.224+0000 I COMMAND [conn42] command admin.$cmd command: serverStatus { serverStatus: 1, recordStats: 0, $db: "admin" } numYields:0 reslen:10255 protocol:op_query 0ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D SHARDING [conn42] Command begin db: config msg id: 227
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 228 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D ASIO [conn42] startCommand: RemoteCommand 228 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.225+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D ASIO [conn42] Request 228 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 151), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 151), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D SHARDING [conn42] Command end db: config msg id: 227
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 38ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D SHARDING [conn42] Command begin db: config msg id: 229
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b84a1824195fadc10fe
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 229 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.263+0000 D ASIO [conn42] startCommand: RemoteCommand 229 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.264+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.264+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.264+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.264+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D ASIO [ShardRegistry] Request 229 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 226), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 226), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D SHARDING [conn42] Command end db: config msg id: 229
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 66ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D SHARDING [conn42] Command begin db: config msg id: 231
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 230 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D ASIO [conn42] startCommand: RemoteCommand 230 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116" }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.330+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.331+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.367+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.367+0000 D ASIO [conn42] Request 230 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 309), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.367+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 309), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.367+0000 D SHARDING [conn42] Command end db: config msg id: 231
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.367+0000 I COMMAND [conn42] query config.settings command: { find: "settings", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:116", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:315 36ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.367+0000 D SHARDING [conn42] Command begin db: config msg id: 233
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b84a1824195fadc1101
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 231 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393324367) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D ASIO [conn42] startCommand: RemoteCommand 231 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393324367) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D NETWORK [ShardRegistry] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.368+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.432+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.432+0000 D ASIO [ShardRegistry] Request 231 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 320), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.432+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.changelog" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 320), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.432+0000 D SHARDING [conn42] Command end db: config msg id: 233
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.432+0000 I COMMAND [conn42] query config.changelog command: { aggregate: "changelog", pipeline: [ { $match: { time: { $gt: new Date(1547393324367) } } }, { $group: { _id: { event: "$what", note: "$details.note" }, count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:245 64ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D SHARDING [conn42] Command begin db: config msg id: 235
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 232 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D ASIO [conn42] startCommand: RemoteCommand 232 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "shards", comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92" }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.433+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.469+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.469+0000 D ASIO [conn42] Request 232 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 320), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.469+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_east1", host: "sessions_gce_us_east1/bambi.node.gce-us-central1.admiral:27017,phil.node.gce-us-east1.admiral:27017,zeta.node.gce-us-east1.admiral:27017", state: 1, tags: [ "gce-us-east1", "staging-gce-us-east1" ] }, { _id: "sessions_gce_us_central1", host: "sessions_gce_us_central1/camden.node.gce-us-central1.admiral:27017,percy.node.gce-us-central1.admiral:27017,umbra.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-central1" ] }, { _id: "sessions_gce_us_west1", host: "sessions_gce_us_west1/chloe.node.gce-us-central1.admiral:27017,tony.node.gce-us-west1.admiral:27017,william.node.gce-us-west1.admiral:27017", state: 1, tags: [ "gce-us-west1" ] }, { _id: "sessions_gce_europe_west1", host: "sessions_gce_europe_west1/hilda.node.gce-europe-west2.admiral:27017,vivi.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west1" ] }, { _id: "sessions_gce_europe_west2", host: "sessions_gce_europe_west2/ignis.node.gce-europe-west2.admiral:27017,keith.node.gce-europe-west3.admiral:27017", state: 1, tags: [ "gce-europe-west2" ] }, { _id: "sessions_gce_europe_west3", host: "sessions_gce_europe_west3/albert.node.gce-europe-west3.admiral:27017,jordan.node.gce-europe-west1.admiral:27017", state: 1, tags: [ "gce-europe-west3" ] }, { _id: "sessions_gce_us_east1_2", host: "sessions_gce_us_east1_2/april.node.gce-us-east1.admiral:27017,queen.node.gce-us-east1.admiral:27017,ralph.node.gce-us-central1.admiral:27017", state: 1, tags: [ "gce-us-east1" ] } ], id: 0, ns: "config.shards" }, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 320), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.469+0000 D SHARDING [conn42] Command end db: config msg id: 235
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.469+0000 I COMMAND [conn42] query config.shards command: { find: "shards", filter: {}, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_topology.go:92", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:1834 36ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D SHARDING [conn42] Command begin db: config msg id: 237
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 233 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D ASIO [conn42] startCommand: RemoteCommand 233 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "chunks", query: {}, allowImplicitCollectionCreation: false }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.470+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.506+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.506+0000 D ASIO [conn42] Request 233 finished with response: { n: 12766, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 408), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.506+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 12766, ok: 1.0, operationTime: Timestamp(1547393924, 38), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 408), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.506+0000 D SHARDING [conn42] Command end db: config msg id: 237
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.506+0000 I COMMAND [conn42] query config.chunks command: { count: "chunks", query: {}, $db: "config" } numYields:0 reslen:210 36ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D SHARDING [conn42] Command begin db: config msg id: 239
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b84a1824195fadc1105
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 234 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D ASIO [conn42] startCommand: RemoteCommand 234 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.507+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.591+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.591+0000 D ASIO [ShardRegistry] Request 234 finished with response: { cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393924, 409), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 469), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.591+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "sessions_gce_us_central1", count: 1971 }, { _id: "sessions_gce_us_east1", count: 2081 }, { _id: "sessions_gce_europe_west1", count: 2037 }, { _id: "sessions_gce_us_west1", count: 3502 }, { _id: "sessions_gce_europe_west2", count: 361 }, { _id: "sessions_gce_us_east1_2", count: 2076 }, { _id: "sessions_gce_europe_west3", count: 738 } ], id: 0, ns: "config.chunks" }, ok: 1.0, operationTime: Timestamp(1547393924, 409), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 469), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.591+0000 D SHARDING [conn42] Command end db: config msg id: 239
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.591+0000 I COMMAND [conn42] query config.chunks command: { aggregate: "chunks", pipeline: [ { $group: { _id: "$shard", count: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:7 reslen:609 84ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.591+0000 D SHARDING [conn42] Command begin db: config msg id: 241
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D TRACKING [conn42] Cmd: aggregate, TrackingId: 5c3b5b84a1824195fadc1107
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 235 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D ASIO [conn42] startCommand: RemoteCommand 235 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], fromMongos: true, cursor: { batchSize: 101 } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.592+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.628+0000 D NETWORK [ShardRegistry] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.628+0000 D ASIO [ShardRegistry] Request 235 finished with response: { cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393924, 409), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 532), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.628+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: true, total: 1 } ], id: 0, ns: "config.databases" }, ok: 1.0, operationTime: Timestamp(1547393924, 409), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393924, 38), t: 1 }, lastOpVisible: { ts: Timestamp(1547393924, 38), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393915, 25), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 38), $clusterTime: { clusterTime: Timestamp(1547393924, 532), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D SHARDING [conn42] Command end db: config msg id: 241
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 I COMMAND [conn42] query config.databases command: { aggregate: "databases", pipeline: [ { $match: { _id: { $ne: "admin" } } }, { $group: { _id: "$partitioned", total: { $sum: 1 } } } ], cursor: {}, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:270 37ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D SHARDING [conn42] Command begin db: config msg id: 243
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 236 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D ASIO [conn42] startCommand: RemoteCommand 236 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ count: "collections", query: { dropped: false }, allowImplicitCollectionCreation: false }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.629+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.665+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.666+0000 D ASIO [conn42] Request 236 finished with response: { n: 3, ok: 1.0, operationTime: Timestamp(1547393924, 533), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 409), $clusterTime: { clusterTime: Timestamp(1547393924, 533), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.666+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ n: 3, ok: 1.0, operationTime: Timestamp(1547393924, 533), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 409), $clusterTime: { clusterTime: Timestamp(1547393924, 533), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.666+0000 D SHARDING [conn42] Command end db: config msg id: 243
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.666+0000 I COMMAND [conn42] query config.collections command: { count: "collections", query: { dropped: false }, $db: "config" } numYields:0 reslen:210 37ms
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D SHARDING [conn42] Command begin db: config msg id: 245
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 237 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393324667) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D ASIO [conn42] startCommand: RemoteCommand 237 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "mongos", filter: { ping: { $gte: new Date(1547393324667) } }, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96" }
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.668+0000 D NETWORK [conn42] Compressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.704+0000 D NETWORK [conn42] Decompressing message with snappy
Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.704+0000 D ASIO [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... Request 237 finished with response: { cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393919866), up: 3487116, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393917743), up: 3433254, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393916802), up: 3487014, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393920001), up: 859, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393916966), up: 74863, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393924026), up: 74895, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393923340), up: 74869, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393920653), up: 74838, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393916965), up: 74835, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393917400), up: 74807, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.node.gce-us-eas
Jan 13 15:38:44 ivy mongos[27723]: t1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393914886), up: 74777, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393920069), up: 74809, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393923751), up: 74786, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393918025), up: 74754, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393921423), up: 74757, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393919416), up: 74701, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393923124), up: 74734, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393917200), up: 74728, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393917953), up: 74699, waiting: true }, { _id: "jacob:27 .......... 5309, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393921969), up: 75272, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393920620), up: 75308, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393923050), up: 76069, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja",
Jan 13 15:38:44 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393919238), up: 76124, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393918530), up: 76125, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393917829), up: 76063, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393920399), up: 76656, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393920180), up: 76655, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916742), up: 76592, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916707), up: 76442, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916739), up: 76592, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916708), up: 76380, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5",
ping: new Date(1547393920660), up: 76446, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916709), up: 76380, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916708), up: 76254, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393918210), up: 76319, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393918210), Jan 13 15:38:44 ivy mongos[27723]: up: 76319, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393915083), up: 204, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916709), up: 76193, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393915107), up: 76253, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393924, 562), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 409), $clusterTime: { clusterTime: Timestamp(1547393924, 562), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.704+0000 D EXECUTOR [conn42] warning: log line attempted (10kB) over max size (10kB), printing beginning and end ... 
Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "francis:27018", advisoryHostFQDNs: [ "francis.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393919866), up: 3487116, waiting: true }, { _id: "garry:27018", advisoryHostFQDNs: [ "garry.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393917743), up: 3433254, waiting: true }, { _id: "hubert:27018", advisoryHostFQDNs: [ "hubert.11-e.ninja" ], mongoVersion: "4.0.4", ping: new Date(1547393916802), up: 3487014, waiting: true }, { _id: "elliot.11-e.ninja:27018", advisoryHostFQDNs: [ "elliot.11-e.ninja", "elliot.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393920001), up: 859, waiting: true }, { _id: "indy.11-e.ninja:27018", advisoryHostFQDNs: [ "indy.11-e.ninja", "indy.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393916966), up: 74863, waiting: true }, { _id: "fred.11-e.ninja:27018", advisoryHostFQDNs: [ "fred.11-e.ninja", "fred.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393924026), up: 74895, waiting: true }, { _id: "dylan.11-e.ninja:27018", advisoryHostFQDNs: [ "dylan.11-e.ninja", "dylan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393923340), up: 74869, waiting: true }, { _id: "trex.11-e.ninja:27018", advisoryHostFQDNs: [ "trex.11-e.ninja", "trex.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393920653), up: 74838, waiting: true }, { _id: "umar.11-e.ninja:27018", advisoryHostFQDNs: [ "umar.11-e.ninja", "umar.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393916965), up: 74835, waiting: true }, { _id: "logan.11-e.ninja:27018", advisoryHostFQDNs: [ "logan.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393917400), up: 74807, waiting: true }, { _id: "madie.11-e.ninja:27018", advisoryHostFQDNs: [ "madie.no Jan 13 15:38:44 ivy mongos[27723]: de.gce-us-east1.admiral" ], mongoVersion: "4.0.5", 
ping: new Date(1547393914886), up: 74777, waiting: true }, { _id: "xavier.11-e.ninja:27018", advisoryHostFQDNs: [ "xavier.11-e.ninja", "xavier.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393920069), up: 74809, waiting: true }, { _id: "ronald.11-e.ninja:27018", advisoryHostFQDNs: [ "ronald.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393923751), up: 74786, waiting: true }, { _id: "stella.11-e.ninja:27018", advisoryHostFQDNs: [ "stella.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393918025), up: 74754, waiting: true }, { _id: "tom.11-e.ninja:27018", advisoryHostFQDNs: [ "tom.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393921423), up: 74757, waiting: true }, { _id: "zaria.11-e.ninja:27018", advisoryHostFQDNs: [ "zaria.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393919416), up: 74701, waiting: true }, { _id: "walter.11-e.ninja:27018", advisoryHostFQDNs: [ "walter.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393923124), up: 74734, waiting: true }, { _id: "vanessa.11-e.ninja:27018", advisoryHostFQDNs: [ "vanessa.node.gce-us-east1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393917200), up: 74728, waiting: true }, { _id: "warren:27018", advisoryHostFQDNs: [ "warren.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393917953), up: 74699, waiting: true }, { _ .......... 
5309, waiting: true }, { _id: "korra:27018", advisoryHostFQDNs: [ "korra.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393921969), up: 75272, waiting: true }, { _id: "tiger:27018", advisoryHostFQDNs: [ "tiger.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393920620), up: 75308, waiting: true }, { _id: "pat:27018", advisoryHostFQDNs: [ "pat.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393923050), up: 76069, waiting: true }, { _id: "urban.11-e.ninja:27018", advisoryHostFQDNs: [ "urban.11-e.ninja", Jan 13 15:38:44 ivy mongos[27723]: "urban.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393919238), up: 76124, waiting: true }, { _id: "taylor.11-e.ninja:27018", advisoryHostFQDNs: [ "taylor.11-e.ninja", "taylor.node.gce-europe-west1.admiral" ], mongoVersion: "4.0.5", ping: new Date(1547393918530), up: 76125, waiting: true }, { _id: "abe:27018", advisoryHostFQDNs: [ "abe.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393917829), up: 76063, waiting: true }, { _id: "diana:27018", advisoryHostFQDNs: [ "diana.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393920399), up: 76656, waiting: true }, { _id: "emil:27018", advisoryHostFQDNs: [ "emil.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393920180), up: 76655, waiting: true }, { _id: "frau:27018", advisoryHostFQDNs: [ "frau.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916742), up: 76592, waiting: true }, { _id: "lisa:27018", advisoryHostFQDNs: [ "lisa.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916707), up: 76442, waiting: true }, { _id: "daniel:27018", advisoryHostFQDNs: [ "daniel.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916739), up: 76592, waiting: true }, { _id: "niles:27018", advisoryHostFQDNs: [ "niles.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916708), up: 76380, waiting: true }, { _id: "mike:27018", advisoryHostFQDNs: [ "mike.11-e.ninja" ], mongoVersion: "4.0.5", 
ping: new Date(1547393920660), up: 76446, waiting: true }, { _id: "urma:27018", advisoryHostFQDNs: [ "urma.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916709), up: 76380, waiting: true }, { _id: "leroy:27018", advisoryHostFQDNs: [ "leroy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916708), up: 76254, waiting: true }, { _id: "vance:27018", advisoryHostFQDNs: [ "vance.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393918210), up: 76319, waiting: true }, { _id: "claire:27018", advisoryHostFQDNs: [ "claire.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393918210), Jan 13 15:38:44 ivy mongos[27723]: up: 76319, waiting: true }, { _id: "ivy:27018", advisoryHostFQDNs: [ "ivy.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393915083), up: 204, waiting: true }, { _id: "nero:27018", advisoryHostFQDNs: [ "nero.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393916709), up: 76193, waiting: true }, { _id: "mona:27018", advisoryHostFQDNs: [ "mona.11-e.ninja" ], mongoVersion: "4.0.5", ping: new Date(1547393915107), up: 76253, waiting: true } ], id: 0, ns: "config.mongos" }, ok: 1.0, operationTime: Timestamp(1547393924, 562), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 409), $clusterTime: { clusterTime: Timestamp(1547393924, 562), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.706+0000 D SHARDING [conn42] Command end db: config msg id: 245 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.706+0000 I COMMAND [conn42] query config.mongos command: { find: "mongos", filter: { ping: { $gte: new Date(1547393324667) } }, skip: 0, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:96", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 
nreturned:63 reslen:9894 38ms Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D SHARDING [conn42] Command begin db: config msg id: 247 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D EXECUTOR [conn42] Scheduling remote command request: RemoteCommand 238 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D ASIO [conn42] startCommand: RemoteCommand 238 -- target:ira.node.gce-us-east1.admiral:27019 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, batchSize: 1, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105" } Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D NETWORK [TaskExecutorPool-0] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.707+0000 D NETWORK [conn42] Compressing message with snappy Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.743+0000 D NETWORK [conn42] Decompressing message with snappy Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.744+0000 D ASIO [conn42] Request 238 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393924, 581), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, 
lastCommittedOpTime: Timestamp(1547393924, 409), $clusterTime: { clusterTime: Timestamp(1547393924, 581), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.744+0000 D EXECUTOR [conn42] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.locks" }, ok: 1.0, operationTime: Timestamp(1547393924, 581), $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393924, 409), $clusterTime: { clusterTime: Timestamp(1547393924, 581), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.744+0000 D SHARDING [conn42] Command end db: config msg id: 247 Jan 13 15:38:44 ivy mongos[27723]: 2019-01-13T15:38:44.744+0000 I COMMAND [conn42] query config.locks command: { find: "locks", filter: { _id: "balancer" }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, comment: "/opt/TeamCityBuild/work/rachel-1/143fe866c7e6f854/.deps/src/github.com/percona/mongodb_exporter/collector/mongos/sharding_status.go:105", $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:241 36ms Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D TRACKING [Uptime reporter] Cmd: NotSet, TrackingId: 5c3b5b85a1824195fadc110c Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 239 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.394+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393925393), up: 215, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", 
wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 239 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.394+0000 cmd:{ update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "ivy:27018" }, u: { $set: { _id: "ivy:27018", ping: new Date(1547393925393), up: 215, waiting: true, mongoVersion: "4.0.5", advisoryHostFQDNs: [ "ivy.11-e.ninja" ] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.394+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D ASIO [ShardRegistry] Request 239 finished with response: { n: 1, nModified: 1, opTime: { ts: Timestamp(1547393925, 291), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393925, 291), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393925, 
291), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393925, 291), $clusterTime: { clusterTime: Timestamp(1547393925, 449), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ n: 1, nModified: 1, opTime: { ts: Timestamp(1547393925, 291), t: 1 }, electionId: ObjectId('7fffffff0000000000000001'), ok: 1.0, operationTime: Timestamp(1547393925, 291), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393925, 291), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393925, 291), $clusterTime: { clusterTime: Timestamp(1547393925, 449), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 240 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.573+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393925, 291), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 240 -- target:ira.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.573+0000 cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393925, 291), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D 
NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.573+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.574+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.574+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.574+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D ASIO [ShardRegistry] Request 240 finished with response: { cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393925, 291), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393925, 291), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393925, 291), $clusterTime: { clusterTime: Timestamp(1547393925, 486), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "balancer", mode: "off", stopped: true, _secondaryThrottle: true } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393925, 291), $replData: { 
term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 291), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 291), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: -1 }, $gleStats: { lastOpTime: { ts: Timestamp(1547393925, 291), t: 1 }, electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1547393925, 291), $clusterTime: { clusterTime: Timestamp(1547393925, 486), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 241 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.610+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393925, 291), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 241 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.610+0000 cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393925, 291), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.610+0000 D 
NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.648+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.648+0000 D ASIO [ShardRegistry] Request 241 finished with response: { cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393925, 423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 423), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 423), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393925, 423), $clusterTime: { clusterTime: Timestamp(1547393925, 486), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.648+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [ { _id: "chunksize", value: 256.0 } ], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393925, 423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 423), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 423), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393925, 423), $clusterTime: { clusterTime: Timestamp(1547393925, 486), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 
15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D EXECUTOR [Uptime reporter] Scheduling remote command request: RemoteCommand 242 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.649+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393925, 423), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D ASIO [Uptime reporter] startCommand: RemoteCommand 242 -- target:leon.node.gce-us-east1.admiral:27019 db:config expDate:2019-01-13T15:39:15.649+0000 cmd:{ find: "settings", filter: { _id: "autosplit" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1547393925, 423), t: 1 } }, limit: 1, maxTimeMS: 30000 } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D NETWORK [ShardRegistry] Compressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.649+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.685+0000 D NETWORK [ShardRegistry] Decompressing message with snappy Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.685+0000 D ASIO [ShardRegistry] Request 242 finished with response: { cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393925, 423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 423), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 423), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { 
lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393925, 423), $clusterTime: { clusterTime: Timestamp(1547393925, 539), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.685+0000 D EXECUTOR [ShardRegistry] Received remote response: RemoteResponse -- cmd:{ cursor: { firstBatch: [], id: 0, ns: "config.settings" }, ok: 1.0, operationTime: Timestamp(1547393925, 423), $replData: { term: 1, lastOpCommitted: { ts: Timestamp(1547393925, 423), t: 1 }, lastOpVisible: { ts: Timestamp(1547393925, 423), t: 1 }, configVersion: 6, replicaSetId: ObjectId('5c002e88ad899acfb0bbfd1b'), primaryIndex: 0, syncSourceIndex: 0 }, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('000000000000000000000000') }, lastCommittedOpTime: Timestamp(1547393925, 423), $clusterTime: { clusterTime: Timestamp(1547393925, 539), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } Jan 13 15:38:45 ivy mongos[27723]: 2019-01-13T15:38:45.686+0000 D NETWORK [ShardRegistry] Timer received error: CallbackCanceled: Callback was canceled