[SERVER-50242] slow query message seen in mongos for ismaster Created: 11/Aug/20  Updated: 29/Oct/23  Resolved: 25/Aug/20

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: None
Fix Version/s: 4.7.0, 4.4.2

Type: Bug Priority: Minor - P4
Reporter: Kin Wai Cheung Assignee: A. Jesse Jiryu Davis
Resolution: Fixed Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File mongos.log-20200811.gz    
Issue Links:
Backports
Related
is related to SERVER-47267 Exhaust isMaster replies are logged a... Closed
is related to SERVER-53197 Awaitable hello/isMaster appears in s... Closed
Backwards Compatibility: Fully Compatible
Backport Requested:
v4.4
Sprint: Repl 2020-09-07
Participants:

 Description   

 

In our mongos logs we see, every xx seconds, a 'slow query' message for an ismaster admin command. Is this a message we can ignore, or is it normal behaviour?

{"t":{"$date":"2020-08-11T10:03:17.122+02:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"Uptime-reporter","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}}
{"t":{"$date":"2020-08-11T10:03:22.893+02:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn171","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","command":{"ismaster":1,"topologyVersion":{"processId":{"$oid":"5f310922c0b272dc8039c2e7"},"counter":0},"maxAwaitTimesMS":10000,"$db":"admin"},"numYields":0,"reslen":436,"protocol":"op_msg","durationMillis":10010}}
{"t":{"$date":"2020-08-11T10:03:24.617+02:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn1047","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","command":{"ismaster":1,"topologyVersion":{"processId":{"$oid":"5f310922c0b272dc8039c2e7"},"counter":0},"maxAwaitTimeMS":10000,"$db":"admin"},"numYields":0,"reslen":436,"protocol":"op_msg","durationMillis":10010}}
{"t":{"$date":"2020-08-11T10:03:27.127+02:00"},"s":"I", "c":"SHARDING", "id":20997, "ctx":"Uptime-reporter","msg":"Refreshed RWC defaults","attr":{"newDefaults":{}}}
{"t":{"$date":"2020-08-11T10:03:32.903+02:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn171","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","command":{"ismaster":1,"topologyVersion":{"processId":{"$oid":"5f310922c0b272dc8039c2e7"},"counter":0},"maxAwaitTimeMS":10000,"$db":"admin"},"numYields":0,"reslen":436,"protocol":"op_msg","durationMillis":10010}}
{"t":{"$date":"2020-08-11T10:03:34.628+02:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn1047","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","command":{"ismaster":1,"topologyVersion":{"processId":{"$oid":"5f310922c0b272dc8039c2e7"},"counter":0},"maxAwaitTimeMS":10000,"$db":"admin"},"numYields":0,"reslen":436,"protocol":"op_msg","durationMillis":10010}}
 

I see a similar Jira request, SERVER-47267, and it seems this was already fixed?



 Comments   
Comment by A. Jesse Jiryu Davis [ 31/Jan/22 ]

aljursav@gmail.com, that looks like a slow insert command, not a slow isMaster command. For help debugging a slow insert, please use one of our tech support options: https://docs.mongodb.com/manual/support/
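For anyone comparing their own logs: the command that triggered a "Slow query" entry is the first key under attr.command, so a quick check like the following (a sketch assuming the structured JSON log format quoted in this ticket; the helper name is made up) tells an insert apart from an isMaster/hello.

import json

def slow_query_command(log_line):
    """Return the command name from a structured 'Slow query' log line."""
    entry = json.loads(log_line)
    return next(iter(entry["attr"]["command"]))

# Returns "insert" for the entry quoted in the comment below, and "ismaster"
# for the entries in the ticket description.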

Comment by Alex Savelev [ 31/Jan/22 ]

Good day!
We are facing the same problem in version 4.4.8.
Logs:

{"t":\{"$date":"2022-01-31T14:08:13.591+02:00"}

,"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn11144","msg":"Slow query","attr":{"type":"command","ns":"Viss_DAIRM2.auditLogEntries","command":{"insert":"auditLogEntries","ordered":true,"bypassDocumentValidation":false,"writeConcern":

{"w":"majority","j":true}

,"txnNumber":27,"$db":"Viss_DAIRM2","lsid":{"id":{"$uuid":"3b5d4fa2-fa00-4c5b-afd3-0b0a0afb0191"}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1643630891,"i":6}},"signature":{"hash":{"$binary":{"base64":"AAAAAAAAAAAAAAAAAAAAAAAAAAA=","subType":"0"}},"keyId":0}}},"ninserted":1,"keysInserted":3,"numYields":0,"reslen":230,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":2}},"ReplicationStateTransition":{"acquireCount":{"w":3}},"Global":{"acquireCount":{"w":2}},"Database":{"acquireCount":{"w":2}},"Collection":{"acquireCount":{"w":2}},"Mutex":{"acquireCount":

{"r":2}

}},"flowControl":{"acquireCount":1,"timeAcquiringMicros":1},"writeConcern":{"w":"majority","j":true,"wtimeout":0,"provenance":"clientSupplied"},"storage":{"data":{"bytesRead":8461,"timeReadingMicros":46369}},"protocol":"op_msg","durationMillis":100}}

Is there a solution to this problem?
Thanks!

Comment by A. Jesse Jiryu Davis [ 02/Dec/20 ]

Thanks, we'll track the follow-up work to fully fix this in SERVER-53197; you can add yourself as a watcher on that ticket for updates.

Comment by Jussi Kauppila [ 02/Dec/20 ]

Hi, thanks for the quick confirmation about performance. We can safely continue upgrading our Atlas clusters to 4.4. This is then just an inconvenience in the Atlas Real Time View and the replica logs.

We are passing the "heartbeatFrequencyMS": 40000 when creating the connection with pymongo (3.11.0, upgrading to 3.11.1 in tomorrow.). So heartbeatFrequencyMS its our custom settings. We have a lot of  connections from different processes that it felt good idea to make the heartbeat a bit less frequent.

 

 

Comment by A. Jesse Jiryu Davis [ 02/Dec/20 ]

Hello, which driver are you using please? Can you confirm that you're passing heartbeatFrequencyMS=40000 to your MongoClient constructor? I'm curious why you're overriding the default heartbeatFrequencyMS, which is 10000. Your answer will help me design a good solution to this logging problem.

To answer your question: these slow query logs do not affect cluster performance, you're correct.

Comment by Jussi Kauppila [ 02/Dec/20 ]

I can still see this locally with mongo 4.2.2. Also, MongoDB Atlas is currently running 4.4.1, so all the primary logs are filled with this in the cluster Real Time View. Hoping that you will deploy 4.4.2 to Atlas soon, but maybe that doesn't fix it.

A confirmation would be nice that this is only a logging issue and that these "slow query" logs don't affect cluster performance.

 

mongo01 | {"t":

{"$date":"2020-12-02T08:25:35.568+00:00"}

,"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn1","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","appName":"e07c1b063449","command":{"ismaster":1,"topologyVersion":{"processId":

{"$oid":"5fc74e09009906741b49b49f"}

,"counter":0},"maxAwaitTimeMS":40000,"$db":"admin","$readPreference":{"mode":"primary"}},"numYields":0,"reslen":304,"locks":{},"protocol":"op_msg","durationMillis":40000}}

Comment by Githook User [ 09/Sep/20 ]

Author:

{'name': 'A. Jesse Jiryu Davis', 'email': 'jesse@mongodb.com', 'username': 'ajdavis'}

Message: SERVER-50242 Don't log awaitable "hello" as slow query

(cherry picked from commit edfe9d2737656626b77543047215c4dfb9d0b23f)
Branch: v4.4
https://github.com/mongodb/mongo/commit/dad759d622444c0a4e69ef2f9194c9d4ce1bfd62

Comment by Githook User [ 25/Aug/20 ]

Author:

{'name': 'A. Jesse Jiryu Davis', 'email': 'jesse@mongodb.com', 'username': 'ajdavis'}

Message: SERVER-50242 Don't log awaitable "hello" as slow query
Branch: master
https://github.com/mongodb/mongo/commit/edfe9d2737656626b77543047215c4dfb9d0b23f

Comment by Dmitry Agranat [ 12/Aug/20 ]

Hi kinwai.cheung@clarivate.com, it has been pointed out to me that the log you've provided is from mongoS, while SERVER-47267 only addressed the mongoD log. I've reopened this ticket and will update you on the progress.

Comment by Kin Wai Cheung [ 12/Aug/20 ]

We are still seeing this message on 4.4.0.

Comment by Dmitry Agranat [ 12/Aug/20 ]

Hi kinwai.cheung@clarivate.com, this is indeed SERVER-47267 and was fixed in 4.4.0-rc4. I will go ahead and close it as a duplicate. Let us know if you still see this in versions greater than 4.4.0.

Comment by Kin Wai Cheung [ 11/Aug/20 ]

Running a sharded cluster on 4.4.0.
