  Core Server / SERVER-8654

yielding too much?


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Gone away
    • Affects Version/s: 2.4.0-rc0
    • Fix Version/s: None
    • Component/s: Concurrency
    • Labels: None
    • Backwards Compatibility: Fully Compatible

    Description

      It appears that we are yielding quite a bit in receivedQuery and receivedGetMore when a secondary is pulling from the oplog. In the scenario shown below the secondary was catching up. The output below is from when it was scanning before reaching a { ts: { $gte: ... } } point; after that point it yields only about every 3ms instead of every 1ms.

      This test was done on Windows.

      Yielding every millisecond means an extra ~1K local db lock acquisitions per second, which can't be helpful...

      Thu Feb 21 14:01:01.264 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.265 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.266 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.267 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.268 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.269 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.270 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.271 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.272 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.273 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.274 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.275 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.276 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.277 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.278 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.279 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.280 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.281 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.282 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.283 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.284 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.285 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.286 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.287 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.288 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
      Thu Feb 21 14:01:01.289 [conn1] yield haveReadLock:1 micros:-1 rec:0000000000000000
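
      To make that rate concrete, here is a minimal, self-contained sketch of the cadence shown in the log above (this is not the server's code; receivedGetMore() and FindingStartCursor::next() are only named for reference). The scan offers a yield on every record it passes, and a throttle firing about once per millisecond works out to roughly a thousand extra release/reacquire cycles on the local db lock per second.

      // Toy model of the per-record yield cadence in the log above. NOT the
      // server's implementation; it only demonstrates that a ~1 ms yield
      // interval means ~1000 lock release/reacquire cycles per second.
      #include <chrono>
      #include <iostream>

      int main() {
          using clock = std::chrono::steady_clock;
          const auto yieldInterval = std::chrono::milliseconds(1);  // cadence seen in the log
          auto lastYield = clock::now();
          long long recordsScanned = 0;
          long long yields = 0;

          const auto start = clock::now();
          while (clock::now() - start < std::chrono::seconds(1)) {
              ++recordsScanned;                                 // "advance one oplog record"
              if (clock::now() - lastYield >= yieldInterval) {
                  ++yields;                                     // release + reacquire the read lock
                  lastYield = clock::now();
              }
          }

          // Prints on the order of 1000 yields for the one-second scan, i.e. the
          // extra ~1K local db lock acquisitions per second noted above.
          std::cout << "records scanned: " << recordsScanned
                    << ", yields in 1s: " << yields << std::endl;
          return 0;
      }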
      

      More info:

      • ElapsedTracker::intervalHasElapsed(): _hitsBetweenMarks == 256 during the initial scan, 128 later when returning documents (see the sketch after the profiler output below).
      • The server was otherwise idle, so no one was queued waiting for a lock.
      • Here is some profiler output. Note that while this number is high, exclusive millis was sometimes quite low, as the getmore is very fast compared to the secondary applying the ops.

      initial scan:

      queryWithQueryOptimizer() 100%
      FindingStartCursor::next() 89%
      staticYield 8%

      getmore:

      receivedGetMore() 97%
      ClientCursor::yieldSometimes() 33%
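
      The yield cadence in yieldSometimes() is governed by that hit-count/timer check. Below is a minimal sketch of how an ElapsedTracker-style throttle of that shape can work; the name _hitsBetweenMarks and the 256/128 values come from the first bullet above, while the body itself is an assumption for illustration rather than the server's actual implementation.

      // Illustrative sketch of a hit-count + elapsed-time throttle, modeled on the
      // ElapsedTracker mentioned above (assumed behavior, not the real code).
      #include <chrono>
      #include <cstdint>

      class ElapsedTrackerSketch {
      public:
          ElapsedTrackerSketch(int32_t hitsBetweenMarks, int32_t msBetweenMarks)
              : _hitsBetweenMarks(hitsBetweenMarks),
                _msBetweenMarks(msBetweenMarks),
                _pings(0),
                _last(std::chrono::steady_clock::now()) {}

          // Called once per record scanned. Returns true when either enough calls
          // have accumulated or enough wall-clock time has passed since the last
          // reset, whichever happens first; the caller then yields the lock.
          bool intervalHasElapsed() {
              if (++_pings >= _hitsBetweenMarks) {
                  _pings = 0;
                  _last = std::chrono::steady_clock::now();
                  return true;
              }
              const auto now = std::chrono::steady_clock::now();
              if (now - _last >= std::chrono::milliseconds(_msBetweenMarks)) {
                  _pings = 0;
                  _last = now;
                  return true;
              }
              return false;
          }

      private:
          const int32_t _hitsBetweenMarks;  // 256 during the initial scan, 128 later
          const int32_t _msBetweenMarks;    // time floor between yields
          int32_t _pings;
          std::chrono::steady_clock::time_point _last;
      };

      Depending on which condition trips first for a given scan rate, this produces either an every-N-records or an every-interval yield cadence; the log above shows the roughly once-per-millisecond case.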

    People

      • Assignee: Unassigned
      • Reporter: Dwight Merriman (dwight@mongodb.com)
