[anush.chinoian@m1-prod-vm-db-mongo02 ~]$ sudo tail -100 /var/log/mongodb/mongod.log
2022-02-19T10:07:43.522+0300 I COMMAND [conn11] command audit_prod.2022-01-08 command: collStats { collStats: "2022-01-08", maxTimeMS: 30000, $readPreference: { mode: "secondaryPreferred" }, $db: "audit_prod" } numYields:0 reslen:76435 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 281930 } } protocol:op_query 288ms
2022-02-19T10:08:03.099+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254482, 866), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 118ms
2022-02-19T10:08:07.749+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254487:749208][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 24 seconds and wrote: 10000 pages (831 MB)
2022-02-19T10:08:23.210+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254503:210796][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 40 seconds and wrote: 49552 pages (2179 MB)
2022-02-19T10:09:23.555+0300 I COMMAND [conn11] command audit_prod.2020-12-29 command: collStats { collStats: "2020-12-29", maxTimeMS: 30000, $readPreference: { mode: "secondaryPreferred" }, $db: "audit_prod" } numYields:0 reslen:76461 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 113715 } } protocol:op_query 120ms
2022-02-19T10:09:33.085+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254572, 2108), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 104ms
2022-02-19T10:09:47.902+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254587:902082][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 24 seconds and wrote: 10000 pages (840 MB)
2022-02-19T10:10:03.100+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254602, 17), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 118ms
2022-02-19T10:10:04.589+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254604:589708][25721:0x7f73f11a9700], file:index-14516--6163443781866286986.wt, WT_SESSION.checkpoint: Checkpoint has been running for 41 seconds and wrote: 50000 pages (2106 MB)
2022-02-19T10:10:05.769+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254605:769633][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 42 seconds and wrote: 50475 pages (2225 MB)
2022-02-19T10:10:10.031+0300 I NETWORK [LogicalSessionCacheReap] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.032+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.130+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.130+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.132+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.133+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.141+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.142+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.142+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:10:10.142+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:11:06.234+0300 I COMMAND [conn8090] command audit_prod.2021-09-01 appName: "mongodb_exporter" command: collStats { collStats: "2021-09-01", scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254665, 1473), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:76469 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 336133 } } protocol:op_msg 341ms
2022-02-19T10:11:06.238+0300 I COMMAND [conn11] command audit_prod.2022-02-16 command: collStats { collStats: "2022-02-16", maxTimeMS: 30000, $readPreference: { mode: "secondaryPreferred" }, $db: "audit_prod" } numYields:0 reslen:76623 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 193318 } } protocol:op_query 199ms
2022-02-19T10:11:30.792+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254690:792413][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 24 seconds and wrote: 10000 pages (847 MB)
2022-02-19T10:11:54.795+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254714:795365][25721:0x7f73f11a9700], file:collection-14530--6163443781866286986.wt, WT_SESSION.checkpoint: Checkpoint has been running for 48 seconds and wrote: 50000 pages (2040 MB)
2022-02-19T10:11:56.118+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254716:118253][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 50 seconds and wrote: 52307 pages (2303 MB)
2022-02-19T10:12:22.523+0300 I REPL [repl-writer-worker-15] applied op: CRUD { ...}, took 139ms
2022-02-19T10:12:44.782+0300 I STORAGE [WT-OplogTruncaterThread-local.oplog.rs] WiredTiger record store oplog truncation finished in: 126ms
2022-02-19T10:13:09.380+0300 I COMMAND [conn11] command audit_prod command: dbStats { dbStats: 1, maxTimeMS: 30000, $readPreference: { mode: "secondaryPreferred" }, $db: "audit_prod" } numYields:0 reslen:411 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_query 111ms
2022-02-19T10:13:21.626+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254801:626884][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 25 seconds and wrote: 10000 pages (845 MB)
2022-02-19T10:13:35.919+0300 I COMMAND [conn7855] command audit_prod.2022-02-07 command: aggregate { aggregate: "2022-02-07", pipeline: [ { $group: { _id: { code: "$code", insurerId: "$args.object.insurerId" }, count: { $sum: 1 } } } ], cursor: {}, lsid: { id: UUID("adb09d01-3ea8-431b-9b5e-f025b3e9fff0") }, $clusterTime: { clusterTime: Timestamp(1645254421, 547), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } planSummary: COLLSCAN cursorid:3738907206374756017 keysExamined:0 docsExamined:43097965 numYields:341353 nreturned:101 reslen:7434 locks:{ ReplicationStateTransition: { acquireCount: { w: 350685 } }, Global: { acquireCount: { r: 350685 } }, Database: { acquireCount: { r: 350685 } }, Collection: { acquireCount: { r: 350685 } }, Mutex: { acquireCount: { r: 9332 } } } storage:{ data: { bytesRead: 53152701453, timeReadingMicros: 68870793 } } protocol:op_msg 394198ms
2022-02-19T10:13:36.416+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254816:416174][25721:0x7f73f11a9700], file:collection-14530--6163443781866286986.wt, WT_SESSION.checkpoint: Checkpoint has been running for 40 seconds and wrote: 50000 pages (2104 MB)
2022-02-19T10:13:37.784+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254817:784795][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 41 seconds and wrote: 52221 pages (2334 MB)
2022-02-19T10:14:07.853+0300 I NETWORK [conn8088] end connection 127.0.0.1:51654 (21 connections now open)
2022-02-19T10:14:07.853+0300 I NETWORK [listener] connection accepted from 127.0.0.1:51664 #8094 (22 connections now open)
2022-02-19T10:14:07.854+0300 I NETWORK [conn8094] received client metadata from 127.0.0.1:51664 conn8094: { driver: { name: "mongo-go-driver", version: "v1.1.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.12.9" }
2022-02-19T10:14:33.096+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254872, 546), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 116ms
2022-02-19T10:14:38.181+0300 I COMMAND [conn11] command audit_prod.2021-06-23 command: collStats { collStats: "2021-06-23", maxTimeMS: 30000, $readPreference: { mode: "secondaryPreferred" }, $db: "audit_prod" } numYields:0 reslen:76472 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 139945 } } protocol:op_query 145ms
2022-02-19T10:15:02.791+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254902:791871][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 24 seconds and wrote: 10000 pages (854 MB)
2022-02-19T10:15:03.088+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254902, 1041), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 109ms
2022-02-19T10:15:10.031+0300 I NETWORK [LogicalSessionCacheReap] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.032+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.132+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.133+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.133+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.133+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.145+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.146+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.146+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.146+0300 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to m1-prod-vm-db-mongo02:27017
2022-02-19T10:15:10.146+0300 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.147+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:15:10.147+0300 I NETWORK [listener] connection accepted from 10.4.126.111:39570 #8097 (23 connections now open)
2022-02-19T10:15:10.147+0300 I NETWORK [conn8097] received client metadata from 10.4.126.111:39570 conn8097: { driver: { name: "NetworkInterfaceTL", version: "4.2.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2022-02-19T10:15:26.343+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254926:343531][25721:0x7f73f11a9700], file:index-14548--6163443781866286986.wt, WT_SESSION.checkpoint: Checkpoint has been running for 48 seconds and wrote: 50000 pages (2020 MB)
2022-02-19T10:15:33.079+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254932, 580), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 101ms
2022-02-19T10:15:35.184+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645254935:184784][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 57 seconds and wrote: 54359 pages (2311 MB)
2022-02-19T10:16:03.094+0300 I COMMAND [conn8090] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254962, 247), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 112ms
2022-02-19T10:16:10.147+0300 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Ending idle connection to host m1-prod-vm-db-mongo03:27017 because the pool meets constraints; 1 connections to that host remain open
2022-02-19T10:16:10.147+0300 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Ending idle connection to host m1-prod-vm-db-mongo02:27017 because the pool meets constraints; 1 connections to that host remain open
2022-02-19T10:16:10.148+0300 I NETWORK [conn8097] end connection 10.4.126.111:39570 (22 connections now open)
2022-02-19T10:16:34.255+0300 I REPL [repl-writer-worker-11] applied op: CRUD {... }, took 121ms
2022-02-19T10:16:35.679+0300 I COMMAND [conn8090] command audit_prod.2021-08-02 appName: "mongodb_exporter" command: collStats { collStats: "2021-08-02", scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645254995, 548), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:76458 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 374211 } } protocol:op_msg 380ms
2022-02-19T10:16:56.959+0300 I NETWORK [listener] connection accepted from 10.4.126.112:57664 #8098 (23 connections now open)
2022-02-19T10:16:56.959+0300 I NETWORK [conn8098] received client metadata from 10.4.126.112:57664 conn8098: { driver: { name: "NetworkInterfaceTL", version: "4.2.2" }, os: { type: "Linux", name: "Red Hat Enterprise Linux Server release 7.6 (Maipo)", architecture: "x86_64", version: "Kernel 3.10.0-957.27.2.el7.x86_64" } }
2022-02-19T10:17:00.643+0300 I REPL [repl-writer-worker-14] applied op: CRUD { ...}, took 131ms
2022-02-19T10:17:01.295+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645255021:295732][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 25 seconds and wrote: 10000 pages (864 MB)
2022-02-19T10:17:15.622+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645255035:622108][25721:0x7f73f11a9700], file:index-14536--6163443781866286986.wt, WT_SESSION.checkpoint: Checkpoint has been running for 40 seconds and wrote: 50000 pages (2036 MB)
2022-02-19T10:17:35.311+0300 I NETWORK [conn8090] end connection 10.4.126.9:57316 (22 connections now open)
2022-02-19T10:17:35.316+0300 I NETWORK [listener] connection accepted from 10.4.126.9:35086 #8099 (23 connections now open)
2022-02-19T10:17:35.316+0300 I NETWORK [conn8099] received client metadata from 10.4.126.9:35086 conn8099: { driver: { name: "mongo-go-driver", version: "v1.1.1" }, os: { type: "linux", architecture: "amd64" }, platform: "go1.13.1", application: { name: "mongodb_exporter" } }
2022-02-19T10:17:35.432+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645255055:432215][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 60 seconds and wrote: 55385 pages (2341 MB)
2022-02-19T10:17:56.960+0300 I NETWORK [conn8098] end connection 10.4.126.112:57664 (22 connections now open)
2022-02-19T10:18:03.087+0300 I COMMAND [conn8099] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645255082, 76), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 106ms
2022-02-19T10:18:33.085+0300 I COMMAND [conn8099] command audit_prod appName: "mongodb_exporter" command: dbStats { dbStats: 1, scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645255112, 985), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:396 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 640 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 100ms
2022-02-19T10:18:35.910+0300 I COMMAND [conn8099] command audit_prod.2022-01-21 appName: "mongodb_exporter" command: collStats { collStats: "2022-01-21", scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645255115, 76), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:76500 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 346628 } } protocol:op_msg 352ms
2022-02-19T10:18:37.222+0300 I REPL [repl-writer-worker-1] applied op: CRUD { ts: [ Timestamp(1645255117, 71), Timestamp(1645255117, 84), Timestamp(1645255117, 96) ], t: [ 976, 976, 976 ], o: [ { _id: "87d03f88-9071-44d7-8ef6-c5ce7eab8661", partitionId: 10, offset: 309129085, serviceName: "ms-integration-adapter-iv", dateTime: "19.02.2022 07:18:04.609", traceId: "2771120f207d66b3", spanId: "3b0c9950c772dbc0", code: "31002001", args: [...], ...
2022-02-19T10:19:00.737+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645255140:737567][25721:0x7f73f11a9700], file:index-486-2866612318786486525.wt, WT_SESSION.checkpoint: Checkpoint has been running for 25 seconds and wrote: 10000 pages (862 MB)
2022-02-19T10:19:29.450+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645255169:450014][25721:0x7f73f11a9700], file:index-14546--6163443781866286986.wt, WT_SESSION.checkpoint: Checkpoint has been running for 53 seconds and wrote: 55000 pages (2200 MB)
2022-02-19T10:19:35.771+0300 I STORAGE [WTCheckpointThread] WiredTiger message [1645255175:771735][25721:0x7f73f11a9700], WT_SESSION.checkpoint: Checkpoint ran for 60 seconds and wrote: 57273 pages (2423 MB)
2022-02-19T10:20:10.031+0300 I NETWORK [LogicalSessionCacheReap] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.031+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.130+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.131+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.131+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.132+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.141+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.141+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.141+0300 I NETWORK [LogicalSessionCacheRefresh] Starting new replica set monitor for replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.141+0300 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:10.142+0300 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for replica01 is replica01/m1-prod-vm-db-mongo01:27017,m1-prod-vm-db-mongo02:27017,m1-prod-vm-db-mongo03:27017
2022-02-19T10:20:23.457+0300 I REPL [repl-writer-worker-2] applied op: CRUD { ts: Timestamp(1645255223, 368), t: 976, h: 0, v: 2, op: "i", ns: "audit_prod.2022-02-19", ui: UUID("77aff01f-127f-4a97-b745-f42d474972c2"), wall: new Date(1645255223338), o: {...}, took 113ms
2022-02-19T10:20:36.125+0300 I COMMAND [conn8099] command audit_prod.2021-04-17 appName: "mongodb_exporter" command: collStats { collStats: "2021-04-17", scale: 1, lsid: { id: UUID("d5f73f8b-75ba-4a94-a56a-0099f5864f02") }, $clusterTime: { clusterTime: Timestamp(1645255235, 567), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "audit_prod", $readPreference: { mode: "primaryPreferred" } } numYields:0 reslen:76446 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 264826 } } protocol:op_msg 270ms
2022-02-19T10:20:58.685+0300 F REPL [repl-writer-worker-6] writer worker caught exception: NamespaceNotFound: Failed to apply operation: { ts: Timestamp(1645255258, 115), t: 976, h: 0, v: 2, op: "u", ns: "audit_prod.2021-12-24", ui: UUID("f5082217-8ce6-4c08-b7b3-1bd06b15822d"), o2: { _id: "917626ac-3ae8-4568-b387-df2e405153b1" }, wall: new Date(1645255258567), o: { _id: "917626ac-3ae8-4568-b387-df2e405153b1", partitionId: 38, offset: 240893546, serviceName: "ms-mdm-data", dateTime: "24.12.2021 00:00:15.980", traceId: "e7280bc3f6393171", spanId: "3270c1d5e9275e07", code: "32000001", args: [ ...], ...} :: caused by :: Unable to resolve f5082217-8ce6-4c08-b7b3-1bd06b15822d on: { op: "u", ns: "audit_prod.2021-12-24", ui: UUID("f5082217-8ce6-4c08-b7b3-1bd06b15822d"), o: { _id: "917626ac-3ae8-4568-b387-df2e405153b1", partitionId: 38, offset: 240893546, serviceName: "ms-mdm-data", dateTime: "24.12.2021 00:00:15.980", traceId: "e7280bc3f6393171", spanId: "3270c1d5e9275e07", code: "32000001", args: [...], ... }, o2: { _id: "917626ac-3ae8-4568-b387-df2e405153b1" }, ts: Timestamp(1645255258, 115), t: 976, h: 0, v: 2, wall: new Date(1645255258567) }
2022-02-19T10:20:58.692+0300 F REPL [rsSync-0] Failed to apply batch of operations. Number of operations in batch: 1. First operation: { op: "u", ns: "audit_prod.2021-12-24", ui: UUID("f5082217-8ce6-4c08-b7b3-1bd06b15822d"), o: { _id: "917626ac-3ae8-4568-b387-df2e405153b1", partitionId: 38, offset: 240893546, serviceName: "ms-mdm-data", dateTime: "24.12.2021 00:00:15.980", traceId: "e7280bc3f6393171", spanId: "3270c1d5e9275e07", code: "32000001", args: [ ...], ... }, o2: { _id: "917626ac-3ae8-4568-b387-df2e405153b1" }, ts: Timestamp(1645255258, 115), t: 976, h: 0, v: 2, wall: new Date(1645255258567) }. Last operation: { op: "u", ns: "audit_prod.2021-12-24", ui: UUID("f5082217-8ce6-4c08-b7b3-1bd06b15822d"), o: { _id: "917626ac-3ae8-4568-b387-df2e405153b1", partitionId: 38, offset: 240893546, serviceName: "ms-mdm-data", dateTime: "24.12.2021 00:00:15.980", traceId: "e7280bc3f6393171", spanId: "3270c1d5e9275e07", code: "32000001", args: [...],...}, o2: { _id: "917626ac-3ae8-4568-b387-df2e405153b1" }, ts: Timestamp(1645255258, 115), t: 976, h: 0, v: 2, wall: new Date(1645255258567) }. Oplog application failed in writer thread 5: NamespaceNotFound: Failed to apply operation: { ts: Timestamp(1645255258, 115), t: 976, h: 0, v: 2, op: "u", ns: "audit_prod.2021-12-24", ui: UUID("f5082217-8ce6-4c08-b7b3-1bd06b15822d"), o2: { _id: "917626ac-3ae8-4568-b387-df2e405153b1" }, wall: new Date(1645255258567), o: { _id: "917626ac-3ae8-4568-b387-df2e405153b1", partitionId: 38, offset: 240893546, serviceName: "ms-mdm-data", dateTime: "24.12.2021 00:00:15.980", traceId: "e7280bc3f6393171", spanId: "3270c1d5e9275e07", code: "32000001",...... } } :: caused by :: Unable to resolve f5082217-8ce6-4c08-b7b3-1bd06b15822d
2022-02-19T10:20:58.693+0300 F - [rsSync-0] Fatal assertion 34437 NamespaceNotFound: Failed to apply operation: { ts: Timestamp(1645255258, 115), t: 976, h: 0, v: 2, op: "u", ns: "audit_prod.2021-12-24", ui: UUID("f5082217-8ce6-4c08-b7b3-1bd06b15822d"), o2: { _id: "917626ac-3ae8-4568-b387-df2e405153b1" }, wall: new Date(1645255258567), o: { _id: "917626ac-3ae8-4568-b387-df2e405153b1", partitionId: 38, offset: 240893546, serviceName: "ms-mdm-data", dateTime: "24.12.2021 00:00:15.980", traceId: "e7280bc3f6393171", spanId: "3270c1d5e9275e07", code: "32000001", args: [...], ...} :: caused by :: Unable to resolve f5082217-8ce6-4c08-b7b3-1bd06b15822d at src/mongo/db/repl/sync_tail.cpp 835
2022-02-19T10:20:58.693+0300 F - [rsSync-0] ***aborting after fassert() failure
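
For context, the final "F" entries show replication aborting because an oplog update (op: "u") against audit_prod.2021-12-24 references collection UUID f5082217-8ce6-4c08-b7b3-1bd06b15822d, which this member cannot resolve. A minimal mongo-shell sketch for checking whether that collection still exists locally and which UUID it carries; the database, collection name, and UUID come from the log above, everything else is illustrative and only a diagnostic check, not a fix:

// Run in the mongo shell against the affected member.
// Lists the 2021-12-24 collection in audit_prod, if present; compare
// infos[0].info.uuid with the UUID in the failed oplog operation.
var infos = db.getSiblingDB("audit_prod").getCollectionInfos({ name: "2021-12-24" });
printjson(infos);  // an empty array means the collection is not present on this member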