[SERVER-50242] slow query message seen in mongos for ismaster Created: 11/Aug/20 Updated: 29/Oct/23 Resolved: 25/Aug/20 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 4.7.0, 4.4.2 |
| Type: | Bug | Priority: | Minor - P4 |
| Reporter: | Kin Wai Cheung | Assignee: | A. Jesse Jiryu Davis |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: | |
| Issue Links: | |
| Backwards Compatibility: | Fully Compatible |
| Backport Requested: | v4.4 |
| Sprint: | Repl 2020-09-07 |
| Participants: | |
| Description |
|
In our mongos logs we see, every xx seconds, a 'slow query' message for an admin ismaster command. Is this a message we can ignore, or is it normal behaviour?
I see a similar Jira request and it seems this is already fixed? |
| Comments |
| Comment by A. Jesse Jiryu Davis [ 31/Jan/22 ] |
|
aljursav@gmail.com, that looks like a slow insert command, not a slow isMaster command. For help debugging a slow insert, please use one of our tech support options: https://docs.mongodb.com/manual/support/ |
| Comment by Alex Savelev [ 31/Jan/22 ] |
|
Good day!

,"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn11144","msg":"Slow query","attr":{"type":"command","ns":"Viss_DAIRM2.auditLogEntries","command":{"insert":"auditLogEntries","ordered":true,"bypassDocumentValidation":false,"writeConcern": {"w":"majority","j":true},"txnNumber":27,"$db":"Viss_DAIRM2","lsid":{"id":{"$uuid":"3b5d4fa2-fa00-4c5b-afd3-0b0a0afb0191"}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1643630891,"i":6}},"signature":{"hash":{"$binary":{"base64":"AAAAAAAAAAAAAAAAAAAAAAAAAAA=","subType":"0"}},"keyId":0}}},"ninserted":1,"keysInserted":3,"numYields":0,"reslen":230,"locks":{"ParallelBatchWriterMode":{"acquireCount":{"r":2}},"ReplicationStateTransition":{"acquireCount":{"w":3}},"Global":{"acquireCount":{"w":2}},"Database":{"acquireCount":{"w":2}},"Collection":{"acquireCount":{"w":2}},"Mutex":{"acquireCount": {"r":2}}},"flowControl":{"acquireCount":1,"timeAcquiringMicros":1},"writeConcern":{"w":"majority","j":true,"wtimeout":0,"provenance":"clientSupplied"},"storage":{"data":{"bytesRead":8461,"timeReadingMicros":46369}},"protocol":"op_msg","durationMillis":100}}

Maybe there is some solution to this problem? |
| Comment by A. Jesse Jiryu Davis [ 02/Dec/20 ] |
|
Thanks, we'll track the follow-up work to really fix this in |
| Comment by Jussi Kauppila [ 02/Dec/20 ] |
|
Hi, thanks for the quick confirmation about performance. We can safely continue upgrading our Atlas clusters to 4.4; this is then just an inconvenience in the Atlas Real Time View and the replica logs. We are passing "heartbeatFrequencyMS": 40000 when creating the connection with pymongo (3.11.0, upgrading to 3.11.1 tomorrow), so heartbeatFrequencyMS is our custom setting. We have so many connections from different processes that it felt like a good idea to make the heartbeat a bit less frequent.
|
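A minimal sketch of the client configuration described in the comment above, assuming pymongo 3.11; the connection string and database name here are placeholders, not taken from this ticket:

from pymongo import MongoClient

# Hypothetical URI for illustration only.
uri = "mongodb+srv://user:password@cluster0.example.mongodb.net/test"

# The driver default for heartbeatFrequencyMS is 10000 (10 seconds).
# Raising it to 40000 means the monitoring connections' awaitable
# isMaster/hello commands can wait up to 40 seconds on the server,
# which is what shows up as maxAwaitTimeMS:40000 / durationMillis:40000
# in the slow-query log entry quoted further down in this ticket.
client = MongoClient(uri, heartbeatFrequencyMS=40000)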
| Comment by A. Jesse Jiryu Davis [ 02/Dec/20 ] |
|
Hello, which driver are you using please? Can you confirm that you're passing heartbeatFrequencyMS=40000 to your MongoClient constructor? I'm curious why you're overriding the default heartbeatFrequencyMS, which is 10000. Your answer will help me design a good solution to this logging problem. To answer your question: these slow query logs do not affect cluster performance, you're correct. |
| Comment by Jussi Kauppila [ 02/Dec/20 ] |
|
I can still see this with the mongo 4.2.2 version locally. Also, Mongo Atlas is currently running 4.4.1, so all the primary logs are filled with this in the cluster real time view. Hoping that you will deploy 4.4.2 to Atlas soon, but maybe that doesn't fix it. Could you confirm that this is only a logging issue and that these "slow query" logs don't affect cluster performance?
mongo01 | {"t": {"$date":"2020-12-02T08:25:35.568+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn1","msg":"Slow query","attr":{"type":"command","ns":"admin.$cmd","appName":"e07c1b063449","command":{"ismaster":1,"topologyVersion":{"processId": {"$oid":"5fc74e09009906741b49b49f"},"counter":0},"maxAwaitTimeMS":40000,"$db":"admin","$readPreference":{"mode":"primary"}},"numYields":0,"reslen":304,"locks":{},"protocol":"op_msg","durationMillis":40000}} |
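In the entry above, durationMillis equals the command's maxAwaitTimeMS, the telltale sign of an awaitable isMaster that simply waited out its timeout rather than a genuinely slow operation. As a rough illustration only (the log file path is an assumption, not from the ticket), such entries can be separated from real slow queries when reading the 4.4 structured log with a short Python script:

import json

LOG_PATH = "mongod.log"  # assumed path, adjust as needed

with open(LOG_PATH) as f:
    for line in f:
        try:
            entry = json.loads(line)
        except ValueError:
            continue
        if entry.get("id") != 51803:  # 51803 is the "Slow query" log id
            continue
        attr = entry.get("attr", {})
        cmd = attr.get("command", {})
        # Awaitable isMaster/hello: the reported duration is mostly time
        # spent waiting up to maxAwaitTimeMS, so skip these entries.
        if ("ismaster" in cmd or "hello" in cmd) and "maxAwaitTimeMS" in cmd:
            continue
        print(attr.get("ns"), attr.get("durationMillis"))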
| Comment by Githook User [ 09/Sep/20 ] |
|
Author: {'name': 'A. Jesse Jiryu Davis', 'email': 'jesse@mongodb.com', 'username': 'ajdavis'} Message: (cherry picked from commit edfe9d2737656626b77543047215c4dfb9d0b23f) |
| Comment by Githook User [ 25/Aug/20 ] |
|
Author: {'name': 'A. Jesse Jiryu Davis', 'email': 'jesse@mongodb.com', 'username': 'ajdavis'} Message: |
| Comment by Dmitry Agranat [ 12/Aug/20 ] |
|
Hi kinwai.cheung@clarivate.com, it has been pointed out to me that the log you've provided is from mongoS, while |
| Comment by Kin Wai Cheung [ 12/Aug/20 ] |
|
we are still seeing this message on 4.4.0 |
| Comment by Dmitry Agranat [ 12/Aug/20 ] |
|
Hi kinwai.cheung@clarivate.com, this is indeed |
| Comment by Kin Wai Cheung [ 11/Aug/20 ] |
|
running a sharded cluster on 4.4.0 |