[SERVER-12413] Assertion on config servers Created: 20/Jan/14  Updated: 29/Sep/15  Resolved: 29/Sep/15

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 2.4.8
Fix Version/s: None

Type: Bug Priority: Critical - P2
Reporter: igor lasic Assignee: Bruce Lucas (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

centos, vmware, nexgen


Attachments: File mongod.log.gz    
Operating System: ALL
Steps To Reproduce:

Had major SAN outages and this started showing.

Participants:

 Description   

Mon Jan 20 16:35:48.110 [conn2699] update config.mongos query: { _id: "render-mu04.colo:27017" } update: { $set: { ping: new Date(1390253748104), up: 111620, waiting: true, mongoVersion: "2.4.8" } } idhack:1 fastmod:1 keyUpdates:0 exception: assertion src/mongo/db/pdfile.cpp:1816 locks(micros) w:17346 8ms
Mon Jan 20 16:35:48.149 [conn2707] end connection 10.84.150.185:53977 (20 connections now open)
Mon Jan 20 16:35:48.286 [conn2709] local.oplog.$main Assertion failure !loc.isNull() src/mongo/db/pdfile.cpp 1816
0xde05e1 0xda15bd 0xab95c1 0xa72524 0xa6b229 0xa8e021 0xa914e5 0xa93847 0x9f6b78 0x9fc0f8 0x6e83a8 0xdccbae 0x3ef9407851 0x3ef90e894d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xde05e1]
/usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda15bd]
/usr/bin/mongod(_ZN5mongo11DataFileMgr17fast_oplog_insertEPNS_16NamespaceDetailsEPKci+0x511) [0xab95c1]
/usr/bin/mongod() [0xa72524]
/usr/bin/mongod(_ZN5mongo5logOpEPKcS1_RKNS_7BSONObjEPS2_Pbb+0x49) [0xa6b229]
/usr/bin/mongod() [0xa8e021]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2d35) [0xa914e5]
/usr/bin/mongod(_ZN5mongo13updateObjectsEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa93847]
/usr/bin/mongod(_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x4d8) [0x9f6b78]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xac8) [0x9fc0f8]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x98) [0x6e83a8]
/usr/bin/mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x42e) [0xdccbae]
/lib64/libpthread.so.0() [0x3ef9407851]
/lib64/libc.so.6(clone+0x6d) [0x3ef90e894d]
Mon Jan 20 16:35:48.294 [conn2709] update config.mongos query: { _id: "render-mu02.colo:27017" } update: { $set: { ping: new Date(1390253748288), up: 111618, waiting: false, mongoVersion: "2.4.8" } } idhack:1 fastmod:1 keyUpdates:0 exception: assertion src/mongo/db/pdfile.cpp:1816 locks(micros) w:17495 8ms
Mon Jan 20 16:35:48.296 [conn2709] local.oplog.$main Assertion failure !loc.isNull() src/mongo/db/pdfile.cpp 1816
0xde05e1 0xda15bd 0xab95c1 0xa72524 0xa6b229 0xa8e021 0xa914e5 0xa93847 0x9f6b78 0x9fc0f8 0x6e83a8 0xdccbae 0x3ef9407851 0x3ef90e894d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xde05e1]
/usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda15bd]
/usr/bin/mongod(_ZN5mongo11DataFileMgr17fast_oplog_insertEPNS_16NamespaceDetailsEPKci+0x511) [0xab95c1]
/usr/bin/mongod() [0xa72524]
/usr/bin/mongod(_ZN5mongo5logOpEPKcS1_RKNS_7BSONObjEPS2_Pbb+0x49) [0xa6b229]
/usr/bin/mongod() [0xa8e021]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2d35) [0xa914e5]
/usr/bin/mongod(_ZN5mongo13updateObjectsEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa93847]
/usr/bin/mongod(_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x4d8) [0x9f6b78]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xac8) [0x9fc0f8]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x98) [0x6e83a8]
/usr/bin/mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x42e) [0xdccbae]
/lib64/libpthread.so.0() [0x3ef9407851]
/lib64/libc.so.6(clone+0x6d) [0x3ef90e894d]
Mon Jan 20 16:35:48.305 [conn2709] update config.mongos query: { _id: "render-mu02.colo:27017" } update: { $set: { ping: new Date(1390253748298), up: 111618, waiting: true, mongoVersion: "2.4.8" } } idhack:1 fastmod:1 keyUpdates:0 exception: assertion src/mongo/db/pdfile.cpp:1816 locks(micros) w:17230 8ms
Mon Jan 20 16:35:49.004 [conn2709] local.oplog.$main Assertion failure !loc.isNull() src/mongo/db/pdfile.cpp 1816
0xde05e1 0xda15bd 0xab95c1 0xa72524 0xa6b229 0xa90f18 0xa93847 0x9f6b78 0x9fc0f8 0x6e83a8 0xdccbae 0x3ef9407851 0x3ef90e894d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xde05e1]
/usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda15bd]
/usr/bin/mongod(_ZN5mongo11DataFileMgr17fast_oplog_insertEPNS_16NamespaceDetailsEPKci+0x511) [0xab95c1]
/usr/bin/mongod() [0xa72524]
/usr/bin/mongod(_ZN5mongo5logOpEPKcS1_RKNS_7BSONObjEPS2_Pbb+0x49) [0xa6b229]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2768) [0xa90f18]
/usr/bin/mongod(_ZN5mongo13updateObjectsEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa93847]
/usr/bin/mongod(_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x4d8) [0x9f6b78]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xac8) [0x9fc0f8]
/usr/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x98) [0x6e83a8]
/usr/bin/mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x42e) [0xdccbae]
/lib64/libpthread.so.0() [0x3ef9407851]
/lib64/libc.so.6(clone+0x6d) [0x3ef90e894d]
Mon Jan 20 16:35:49.012 [conn2709] update config.lockpings query: { _id: "render-mu02.colo:27017:1390142130:1804289383" } update: { $set: { ping: new Date(1390253748931) } } nscanned:1 keyUpdates:1 exception: assertion src/mongo/db/pdfile.cpp:1816 locks(micros) w:16427 8ms
Mon Jan 20 16:35:49.074 [conn2709] end connection 10.84.150.182:40943 (19 connections now open)
Mon Jan 20 16:35:50.544 [replmaster] local.oplog.$main Assertion failure !loc.isNull() src/mongo/db/pdfile.cpp 1816
0xde05e1 0xda15bd 0xab95c1 0xa72524 0xa6b872 0xb8d045 0xe28e69 0x3ef9407851 0x3ef90e894d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xde05e1]



 Comments   
Comment by Bruce Lucas (Inactive) [ 21/Jan/14 ]

Hi Igor,

Glad to hear things are working now. Very happy to have been of assistance.

Bruce

Comment by igor lasic [ 21/Jan/14 ]

Copied the surviving configuration around. Restarted.

So far so good.

Closing.

Thank you for your help.

Comment by Bruce Lucas (Inactive) [ 21/Jan/14 ]

Yes, just stop that config server (to make sure the database files are static), then copy that data and replicate it to the other two config servers.
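
As a rough sketch of that copy (the dbpath /data/configdb and the use of scp/ssh here are assumptions; substitute your actual dbpath, and mongo-c02/mongo-c03 are the config server hosts seen in the logs):

    # On the surviving config server, stop mongod so the files are static, then package the dbpath:
    mongod --shutdown --dbpath /data/configdb        # assumed dbpath; use your actual path
    tar czf /tmp/configdata.tgz -C /data/configdb .

    # On each of the other two config servers (shown for mongo-c02; repeat for mongo-c03),
    # stop mongod, move the old files aside, and unpack the copy before restarting:
    scp /tmp/configdata.tgz mongo-c02:/tmp/
    ssh mongo-c02 'mv /data/configdb /data/configdb.bad && mkdir /data/configdb && tar xzf /tmp/configdata.tgz -C /data/configdb'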

Bruce

Comment by igor lasic [ 21/Jan/14 ]

Definitely SAN related.

I am following the config server restore instructions.
One config server out of three survived without failures.

Should I copy the data of the surviving one around, or is there a different recommended procedure?

Comment by Bruce Lucas (Inactive) [ 21/Jan/14 ]

Hi Igor,

Thanks for uploading the log. It looks like there is corruption in one of the files of the local database on the config server. I think it's reasonable to assume that it was caused by a storage error related to the SAN outage; what was the timing of that? The first sign of trouble in the log is the following entry at 12:50:

Sun Jan 19 12:50:30.045 [conn40] couldn't make room for new record (len: 184) in capped ns local.oplog.$main
  Extent 0 (capExtent)
    magic: 41424344 extent->ns: local.oplog.$main
    fr: null lr: 0:ce94b8 extent->len: 5242880
 local.oplog.$main Assertion failure len * 5 > lastExtentSize src/mongo/db/namespace_details.cpp 457

However, because the local.oplog.$main collection is a circular capped collection, it is possible that the corruption occurred some time earlier and was only seen by mongod when the collection wrapped around to the corrupted region.
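
For reference, one way to see whether that capped collection reports damage (illustrative only; port 27019 matches the config servers in these logs, and oplog.$main is the master/slave-style oplog that 2.4 config servers use):

    mongo --port 27019 local --eval 'printjson(db.getCollection("oplog.$main").validate(true))'
    mongo --port 27019 local --eval 'printjson(db.getCollection("oplog.$main").stats())'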

To recover from this you can reinitialize that config server, after ensuring that the storage is working normally, following the same procedure as for replacing a config server. Before doing that you should be sure to run any relevant hardware diagnostics and fsck the filesystem.
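
For example, the filesystem check might look like the following (the mount point and device name are hypothetical; unmount the volume that holds the config server's dbpath before checking it):

    umount /data/configdb                     # assumed mount point for the dbpath
    fsck -f /dev/mapper/configvol             # hypothetical device; use the actual SAN volume
    smartctl -H /dev/sda                      # quick health check, if the storage exposes SMART data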

If you would like us to investigate and look for more definitive evidence that this corruption was caused by the SAN outage, before recovering that config server please take the following steps:

  • Upload all currently available kernel logs (dmesg and syslog) from the affected node, including whatever older rotated logs might be available; these might have evidence of the specific problem associated with the corruption. For reference here are some sample commands for collecting those logs:

    dmesg >/tmp/dmesg
    tar czf /tmp/system-logs.tgz /var/log/{messages*,syslog*,dmesg*} /tmp/dmesg

  • Upload the local.* database files from the affected node. You can just shut down that config server and then use e.g. tar czf /tmp/local.tgz local.* in the database directory for that mongod to collect the appropriate files.

If the resulting two files are less than 150MB and you are comfortable with them being publicly visible you can attach them to this ticket; if they are too large or you would like to keep them private we can provide a secure private location for you to upload them.

Thanks,
Bruce

Comment by igor lasic [ 21/Jan/14 ]

One of the config servers' logs.

Errors start around Jan 19 11:00

First error below.

Sun Jan 19 11:17:21.750 [conn60] end connection 10.84.150.52:43273 (27 connections now open)
Sun Jan 19 11:17:35.214 [conn59] end connection 10.84.150.51:50992 (26 connections now open)
Sun Jan 19 11:17:47.547 [conn55] end connection 10.84.150.103:48016 (25 connections now open)
Sun Jan 19 11:17:48.612 [conn61] end connection 10.84.150.53:49020 (24 connections now open)
Sun Jan 19 11:17:59.122 [conn62] end connection 10.84.150.54:42804 (23 connections now open)
Sun Jan 19 11:29:35.317 [conn63] end connection 10.84.150.153:54228 (22 connections now open)
Sun Jan 19 11:29:35.318 [initandlisten] connection accepted from 10.84.150.153:54943 #64 (23 connections now open)
Sun Jan 19 11:29:52.674 [conn28] update config.mongos query: { _id: "web01.healthmetrics.org:27017" } update: { $set: { ping: new Date(1390148992191), up: 6901, waiting: false, mongoVersion: "2.4.8" } } idhack:1 nupdated:1 fastmod:1 keyUpdates:0 locks(micros) w:958524 479ms
Sun Jan 19 11:56:37.642 [conn64] end connection 10.84.150.153:54943 (22 connections now open)
Sun Jan 19 11:56:37.643 [initandlisten] connection accepted from 10.84.150.153:55656 #65 (23 connections now open)
Sun Jan 19 12:23:40.026 [conn65] end connection 10.84.150.153:55656 (22 connections now open)
Sun Jan 19 12:23:40.028 [initandlisten] connection accepted from 10.84.150.153:56374 #66 (23 connections now open)
Sun Jan 19 12:50:30.045 [conn40] couldn't make room for new record (len: 184) in capped ns local.oplog.$main
Extent 0 (capExtent)
magic: 41424344 extent->ns: local.oplog.$main
fr: null lr: 0:ce94b8 extent->len: 5242880
local.oplog.$main Assertion failure len * 5 > lastExtentSize src/mongo/db/namespace_details.cpp 457
0xde05e1 0xda15bd 0xa5e4d8 0x81acaa 0xa5fe39 0xa5fe7c 0xab929c 0xa72524 0xa6b229 0xa8e021 0xa914e5 0xa93847 0x9f6b78 0x9fc0f8 0x6e83a8 0xdccbae 0x3ef9407851 0x3ef90e894d
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xde05e1]
/usr/bin/mongod(_ZN5mongo12verifyFailedEPKcS1_j+0xfd) [0xda15bd]
/usr/bin/mongod(_ZNK5mongo16NamespaceDetails13maybeComplainEPKci+0x928) [0xa5e4d8]
/usr/bin/mongod(_ZN5mongo16NamespaceDetails11cappedAllocEPKci+0xa8a) [0x81acaa]
/usr/bin/mongod(_ZN5mongo16NamespaceDetails6_allocEPKci+0x29) [0xa5fe39]
/usr/bin/mongod(_ZN5mongo16NamespaceDetails5allocEPKci+0x3c) [0xa5fe7c]
/usr/bin/mongod(_ZN5mongo11DataFileMgr17fast_oplog_insertEPNS_16NamespaceDetailsEPKci+0x1ec) [0xab929c]
/usr/bin/mongod() [0xa72524]
/usr/bin/mongod(_ZN5mongo5logOpEPKcS1_RKNS_7BSONObjEPS2_Pbb+0x49) [0xa6b229]
/usr/bin/mongod() [0xa8e021]
/usr/bin/mongod(_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0x2d35) [0xa914e5]
/usr/bin/mongod(_ZN5mongo13updateObjectsEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xb7) [0xa93847]
/usr/bin/mongod(_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x4d8) [0x9f6b78]
/usr/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xac8) [0x9fc0f8]

Comment by Daniel Pasette (Inactive) [ 21/Jan/14 ]

Can you upload the log files from the config server that is showing the exception, starting from just before the time of the SAN outage?

Comment by igor lasic [ 20/Jan/14 ]

The previous log was from the config servers. This is what the mongos instances say:

Mon Jan 20 16:57:29.728 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
Mon Jan 20 16:57:29.733 [Balancer] SyncClusterConnection connecting to [mongo-c03:27019]
Mon Jan 20 16:57:54.128 [LockPinger] scoped connection to mongo-c01:27019,mongo-c02:27019,mongo-c03:27019 not being returned to the pool
Mon Jan 20 16:57:54.128 [LockPinger] warning: distributed lock pinger 'mongo-c01:27019,mongo-c02:27019,mongo-c03:27019/render-mu05.colo:27017:1390142128:1804289383' detected an exception while pinging. :: caused by :: update not consistent ns: config.lockpings query: { _id: "render-mu05.colo:27017:1390142128:1804289383" } update: { $set: { ping: new Date(1390255073928) } } gle1: { err: "!loc.isNull()", n: 0, connectionId: 101, waited: 30, ok: 1.0 } gle2: { updatedExisting: true, n: 1, lastOp: Timestamp 1390255074000|1, connectionId: 100, waited: 29, err: null, ok: 1.0 }

Mon Jan 20 16:57:59.778 [Balancer] SyncClusterConnection connecting to [mongo-c01:27019]
Mon Jan 20 16:57:59.779 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
Mon Jan 20 16:57:59.781 [Balancer] SyncClusterConnection connecting to [mongo-c03:27019]
render-mu01.colo:
Mon Jan 20 16:57:07.562 [Balancer] SyncClusterConnection connecting to [mongo-c01:27019]
Mon Jan 20 16:57:07.563 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
Mon Jan 20 16:57:07.564 [Balancer] SyncClusterConnection connecting to [mongo-c03:27019]
Mon Jan 20 16:57:33.205 [LockPinger] scoped connection to mongo-c01:27019,mongo-c02:27019,mongo-c03:27019 not being returned to the pool
Mon Jan 20 16:57:33.205 [LockPinger] warning: distributed lock pinger 'mongo-c01:27019,mongo-c02:27019,mongo-c03:27019/render-mu01.colo:27017:1390145207:1804289383' detected an exception while pinging. :: caused by :: update not consistent ns: config.lockpings query: { _id: "render-mu01.colo:27017:1390145207:1804289383" } update: { $set: { ping: new Date(1390255053085) } } gle1: { err: "!loc.isNull()", n: 0, connectionId: 81, waited: 28, ok: 1.0 } gle2: { updatedExisting: true, n: 1, lastOp: Timestamp 1390255053000|1, connectionId: 80, waited: 2, err: null, ok: 1.0 }

Mon Jan 20 16:57:37.616 [Balancer] SyncClusterConnection connecting to [mongo-c01:27019]
Mon Jan 20 16:57:37.617 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
Mon Jan 20 16:57:37.618 [Balancer] SyncClusterConnection connecting to [mongo-c03:27019]
Mon Jan 20 16:58:03.351 [LockPinger] scoped connection to mongo-c01:27019,mongo-c02:27019,mongo-c03:27019 not being returned to the pool
Mon Jan 20 16:58:03.351 [LockPinger] warning: distributed lock pinger 'mongo-c01:27019,mongo-c02:27019,mongo-c03:27019/render-mu01.colo:27017:1390145207:1804289383' detected an exception while pinging. :: caused by :: update not consistent ns: config.lockpings query: { _id: "render-mu01.colo:27017:1390145207:1804289383" } update: { $set: { ping: new Date(1390255083205) } } gle1: { err: "!loc.isNull()", n: 0, connectionId: 102, waited: 33, ok: 1.0 } gle2: { updatedExisting: true, n: 1, lastOp: Timestamp 1390255083000|1, connectionId: 101, waited: 27, err: null, ok: 1.0 }

render-mu04.colo:
Mon Jan 20 16:57:21.970 [LockPinger] scoped connection to mongo-c01:27019,mongo-c02:27019,mongo-c03:27019 not being returned to the pool
Mon Jan 20 16:57:21.970 [LockPinger] warning: distributed lock pinger 'mongo-c01:27019,mongo-c02:27019,mongo-c03:27019/render-mu04.colo:27017:1390142128:1804289383' detected an exception while pinging. :: caused by :: update not consistent ns: config.lockpings query: { _id: "render-mu04.colo:27017:1390142128:1804289383" } update: { $set: { ping: new Date(1390255041851) } } gle1: { err: "!loc.isNull()", n: 0, connectionId: 79, waited: 16, ok: 1.0 } gle2: { updatedExisting: true, n: 1, lastOp: Timestamp 1390255041000|1, connectionId: 78, waited: 16, err: null, ok: 1.0 }

Mon Jan 20 16:57:26.336 [Balancer] SyncClusterConnection connecting to [mongo-c01:27019]
Mon Jan 20 16:57:26.338 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
Mon Jan 20 16:57:26.339 [Balancer] SyncClusterConnection connecting to [mongo-c03:27019]
Mon Jan 20 16:57:52.169 [LockPinger] scoped connection to mongo-c01:27019,mongo-c02:27019,mongo-c03:27019 not being returned to the pool
Mon Jan 20 16:57:52.169 [LockPinger] warning: distributed lock pinger 'mongo-c01:27019,mongo-c02:27019,mongo-c03:27019/render-mu04.colo:27017:1390142128:1804289383' detected an exception while pinging. :: caused by :: update not consistent ns: config.lockpings query: { _id: "render-mu04.colo:27017:1390142128:1804289383" } update: { $set: { ping: new Date(1390255071970) } } gle1: { err: "!loc.isNull()", n: 0, connectionId: 99, waited: 34, ok: 1.0 } gle2: { updatedExisting: true, n: 1, lastOp: Timestamp 1390255072000|1, connectionId: 98, waited: 25, err: null, ok: 1.0 }

Mon Jan 20 16:57:56.392 [Balancer] SyncClusterConnection connecting to [mongo-c01:27019]
Mon Jan 20 16:57:56.393 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
Mon Jan 20 16:57:56.394 [Balancer] SyncClusterConnection connecting to [mongo-c03:27019]
render-mu02.colo:
Mon Jan 20 16:57:24.578 [LockPinger] scoped connection to mongo-c01:27019,mongo-c02:27019,mongo-c03:27019 not being returned to the pool
Mon Jan 20 16:57:24.578 [LockPinger] warning: distributed lock pinger 'mongo-c01:27019,mongo-c02:27019,mongo-c03:27019/render-mu02.colo:27017:1390142130:1804289383' detected an exception while pinging. :: caused by :: update not consistent ns: config.lockpings query: { _id: "render-mu02.colo:27017:1390142130:1804289383" } update: { $set: { ping: new Date(1390255044423) } } gle1: { err: "!loc.isNull()", n: 0, connectionId: 80, waited: 10, ok: 1.0 } gle2: { updatedExisting: true, n: 1, lastOp: Timestamp 1390255044000|1, connectionId: 79, waited: 19, err: null, ok: 1.0 }

Mon Jan 20 16:57:26.753 [Balancer] SyncClusterConnection connecting to [mongo-c01:27019]
Mon Jan 20 16:57:26.754 [Balancer] SyncClusterConnection connecting to [mongo-c02:27019]
