[SERVER-43975] Global lock waits on a secondary make finds from the secondary very slow Created: 12/Oct/19  Updated: 18/Nov/19  Resolved: 18/Nov/19

Status: Closed
Project: Core Server
Component/s: Querying, Replication, WiredTiger
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: hancang2000 Assignee: Dmitry Agranat
Resolution: Incomplete Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File diagnostic.tar.gz    
Issue Links:
Duplicate
is duplicated by SERVER-43976 send query in the secondary to query ... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Participants:

 Description   

Hi,

We have a problem where queries against small collections on a secondary occasionally run very slowly, about once or twice a day. In the slow-query log entries, timeAcquiringMicros accounts for almost all of the query time. If this keeps happening we may not be able to use the secondary as a query node to share load with the primary, because these stalls block queries for too long and cause our application to hang.
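Because these stalls can hang the application, one possible workaround (a minimal mongo shell sketch added for illustration, not part of the original report) is to put a client-side time budget on the read so a blocked secondary query fails fast instead of waiting indefinitely. The namespace (gacc.r_plugin) and filter ({ type: "h3c" }) are taken from the log entry below; the 5000 ms budget is an illustrative value, not a recommendation:

// Run against the secondary; a stalled read returns a MaxTimeMSExpired error instead of hanging.
db.getMongo().setReadPref("secondaryPreferred");
db.getSiblingDB("gacc").r_plugin.find({ type: "h3c" }).limit(1).maxTimeMS(5000);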

 

The log entry looks like this:

2019-10-12T07:34:50.684+0800 I COMMAND [conn1279777] command gacc.r_plugin command: find { find: "r_plugin", filter: { type: "h3c" }, limit: 1, shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ] } planSummary: IXSCAN { type: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:350 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 38560627 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 38560ms
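In this entry, timeAcquiringMicros: { r: 38560627 } means the operation spent roughly 38.56 seconds waiting to acquire the global read lock, which accounts for essentially the entire reported duration of 38560 ms; the query itself examined only one index key and one document. A minimal mongo shell sketch (assuming the waitingForLock and secs_running fields of 3.x/4.0-era currentOp output) for catching such stalls on the secondary while they are happening:

// Run on the secondary during a stall: list operations that are waiting for a lock
// or that have been running for more than five seconds.
db.currentOp({ $or: [ { waitingForLock: true }, { secs_running: { $gte: 5 } } ] });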



 Comments   
Comment by Dmitry Agranat [ 18/Nov/19 ]

Hi,

We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. If this is still an issue for you, please provide additional information and we will reopen the ticket.

Regards,
Dima

Comment by Dmitry Agranat [ 30/Oct/19 ]

Hi, I was not able to find any issue during this time (Oct 27th 22:40-23:00 UTC). Could you post the MongoDB operation from this node showing the reported issue? Alternatively, you can upload all the data to this support uploader location.

Comment by hancang2000 [ 30/Oct/19 ]

Hi Dima,

The time is about 22:40-23:00.

Comment by Dmitry Agranat [ 30/Oct/19 ]

Hi zhouhancang,

The data you've uploaded covers Oct 27th-28th 12:00 UTC, could you point me to the timestamp of the reported event?

Thanks,
Dima

Comment by hancang2000 [ 30/Oct/19 ]

diagnostic.tar.gz

 

Hi Dima,

I have uploaded my diagnostic file.

Comment by Dmitry Agranat [ 29/Oct/19 ]

Hi zhouhancang,

We still need additional information to diagnose the problem. If this is still an issue for you, would you please upload the requested data?

Thanks,
Dima

Comment by Dmitry Agranat [ 15/Oct/19 ]

Hi zhouhancang,

Would you please archive (tar or zip) the mongod.log files and the $dbpath/diagnostic.data directory (the contents are described here) and upload them to this support uploader location?

Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time.

Thanks,
Dima

Comment by hancang2000 [ 12/Oct/19 ]

2019-10-12T07:34:50.684+0800 I COMMAND [conn1279777] command gacc.r_plugin command: find { find: "r_plugin", filter: { type: "h3c" }, limit: 1, shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ] } planSummary: IXSCAN { type: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:350 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 38560627 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 38560ms
